Part II. Integrating Red Hat Fuse with Red Hat Process Automation Manager
Part II. Integrating Red Hat Fuse with Red Hat Process Automation Manager As a system administrator, you can integrate Red Hat Process Automation Manager with Red Hat Fuse on Red Hat JBoss Enterprise Application Platform to facilitate communication between integrated services.
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/assembly-integrating-fuse
3.4. Multi-port Services and Load Balancer Add-On
3.4. Multi-port Services and Load Balancer Add-On LVS routers under any topology require extra configuration when creating multi-port Load Balancer Add-On services. Multi-port services can be created artificially by using firewall marks to bundle together different, but related protocols, such as HTTP (port 80) and HTTPS (port 443), or when Load Balancer Add-On is used with true multi-port protocols, such as FTP. In either case, the LVS router uses firewall marks to recognize that packets destined for different ports, but bearing the same firewall mark, should be handled identically. Also, when combined with persistence, firewall marks ensure connections from the client machine are routed to the same host, as long as the connections occur within the length of time specified by the persistence parameter. For more on assigning persistence to a virtual server, see Section 4.6.1, "The VIRTUAL SERVER Subsection" . Unfortunately, the mechanism used to balance the loads on the real servers, IPVS, can recognize the firewall marks assigned to a packet, but cannot itself assign firewall marks. The job of assigning firewall marks must be performed by the network packet filter, iptables , outside of the Piranha Configuration Tool . 3.4.1. Assigning Firewall Marks To assign firewall marks to a packet destined for a particular port, the administrator must use iptables . This section illustrates how to bundle HTTP and HTTPS as an example; however, FTP is another commonly clustered multi-port protocol. If the Load Balancer Add-On is used for FTP services, see Section 3.5, "Configuring FTP" for configuration details. The basic rule to remember when using firewall marks is that for every protocol using a firewall mark in the Piranha Configuration Tool, there must be a commensurate iptables rule to assign marks to the network packets. Before creating network packet filter rules, make sure there are no rules already in place. To do this, open a shell prompt, log in as root, and type: /sbin/service iptables status If iptables is not running, the prompt will instantly reappear. If iptables is active, it displays a set of rules. If rules are present, type the following command: /sbin/service iptables stop If the rules already in place are important, check the contents of /etc/sysconfig/iptables and copy any rules worth keeping to a safe place before proceeding. Below are rules that assign the same firewall mark, 80, to incoming traffic destined for the floating IP address, n.n.n.n , on ports 80 and 443. For instructions on assigning the VIP to the public network interface, see Section 4.6.1, "The VIRTUAL SERVER Subsection" . Also note that you must log in as root and load the module for iptables before issuing rules for the first time. In the above iptables commands, n.n.n.n should be replaced with the floating IP for your HTTP and HTTPS virtual servers. These commands have the net effect of assigning any traffic addressed to the VIP on the appropriate ports a firewall mark of 80, which in turn is recognized by IPVS and forwarded appropriately. Warning The commands above will take effect immediately, but do not persist through a reboot of the system. To ensure network packet filter settings are restored upon reboot, see Section 3.6, "Saving Network Packet Filter Settings"
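For reference, the full sequence described above can be sketched as follows; this is a minimal example that assumes the standard RHEL 6 iptables service script, and n.n.n.n is a placeholder for your VIP:

# Check for existing rules; if any are present, back up /etc/sysconfig/iptables before stopping iptables
/sbin/service iptables status
/sbin/service iptables stop

# Assign firewall mark 80 to HTTP and HTTPS traffic addressed to the floating IP
/sbin/iptables -t mangle -A PREROUTING -p tcp -d n.n.n.n/32 -m multiport --dports 80,443 -j MARK --set-mark 80

# Optionally save the rules so they survive a reboot (see Section 3.6, "Saving Network Packet Filter Settings")
/sbin/service iptables save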
[ "/sbin/iptables -t mangle -A PREROUTING -p tcp -d n.n.n.n/32 -m multiport --dports 80,443 -j MARK --set-mark 80" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/s1-lvs-multi-vsa
Preface
Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) AWS clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. Note Only internal OpenShift Data Foundation clusters are supported on AWS. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, start with the requirements in the Preparing to deploy OpenShift Data Foundation chapter, and then follow the deployment process for your environment based on your requirements: Deploy using dynamic storage devices Deploy standalone Multicloud Object Gateway component
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_using_amazon_web_services/preface-aws
Chapter 1. OpenShift Container Platform 4.12 Documentation
Chapter 1. OpenShift Container Platform 4.12 Documentation Welcome to the official OpenShift Container Platform 4.12 documentation, where you can learn about OpenShift Container Platform and start exploring its features. To navigate the OpenShift Container Platform 4.12 documentation, you can use one of the following methods: Use the left navigation bar to browse the documentation. Select the task that interests you from the contents of this Welcome page. Start with Architecture and Security and compliance . Then, see the release notes . 1.1. Cluster installer activities Explore these OpenShift Container Platform installation tasks. OpenShift Container Platform installation overview : You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The OpenShift Container Platform installation program provides the flexibility to deploy OpenShift Container Platform on a range of different platforms. Install a cluster on Alibaba : You can install OpenShift Container Platform on Alibaba Cloud on installer-provisioned infrastructure. This is currently a Technology Preview feature only. Install a cluster on AWS : You have many installation options when you deploy a cluster on Amazon Web Services (AWS). You can deploy clusters with default settings or custom AWS settings . You can also deploy a cluster on AWS infrastructure that you provisioned yourself. You can modify the provided AWS CloudFormation templates to meet your needs. Install a cluster on Azure : You can deploy clusters with default settings , custom Azure settings , or custom networking settings in Microsoft Azure. You can also provision OpenShift Container Platform into an Azure Virtual Network or use Azure Resource Manager Templates to provision your own infrastructure. Install a cluster on Azure Stack Hub : You can install OpenShift Container Platform on Azure Stack Hub on installer-provisioned infrastructure. Install a cluster on GCP : You can deploy clusters with default settings or custom GCP settings on Google Cloud Platform (GCP). You can also perform a GCP installation where you provision your own infrastructure. Install a cluster on IBM Cloud VPC : You can install OpenShift Container Platform on IBM Cloud VPC on installer-provisioned infrastructure. Install a cluster on IBM Power : You can install OpenShift Container Platform on IBM Power on user-provisioned infrastructure. Install a cluster on VMware vSphere : You can install OpenShift Container Platform on supported versions of vSphere. Install a cluster on VMware Cloud : You can install OpenShift Container Platform on supported versions of VMware Cloud (VMC) on AWS. Install a cluster with z/VM on IBM Z and IBM(R) LinuxONE : You can install OpenShift Container Platform with z/VM on IBM Z and IBM(R) LinuxONE on user-provisioned infrastructure. Install a cluster with RHEL KVM on IBM Z and IBM(R) LinuxONE : You can install OpenShift Container Platform with RHEL KVM on IBM Z and IBM(R) LinuxONE on user-provisioned infrastructure. Install an installer-provisioned cluster on bare metal : You can install OpenShift Container Platform on bare metal with an installer-provisioned architecture. Install a user-provisioned cluster on bare metal : If none of the available platform and cloud provider deployment options meet your needs, you can install OpenShift Container Platform on user-provisioned bare metal infrastructure. 
Install a cluster on Red Hat OpenStack Platform (RHOSP) : You can install a cluster on RHOSP with customizations , with network customizations , or on a restricted network on installer-provisioned infrastructure. You can install a cluster on RHOSP with customizations or with network customizations on user-provisioned infrastructure. Install a cluster on Red Hat Virtualization (RHV) : You can deploy clusters on Red Hat Virtualization (RHV) with a quick install or an install with customizations . Install a cluster in a restricted network : If your cluster that uses user-provisioned infrastructure on AWS , GCP , vSphere , IBM Z and IBM(R) LinuxONE with z/VM , IBM Z and IBM(R) LinuxONE with RHEL KVM , IBM Power , or bare metal does not have full access to the internet, then mirror the OpenShift Container Platform installation images using one of the following methods and install a cluster in a restricted network. Mirroring images for a disconnected installation Mirroring images for a disconnected installation using the oc-mirror plug-in Install a cluster in an existing network : If you use an existing Virtual Private Cloud (VPC) in AWS or GCP or an existing VNet on Azure, you can install a cluster. Install a private cluster : If your cluster does not require external internet access, you can install a private cluster on AWS , Azure , GCP , or IBM Cloud VPC . Internet access is still required to access the cloud APIs and installation media. Check installation logs : Access installation logs to evaluate issues that occur during OpenShift Container Platform installation. Access OpenShift Container Platform : Use credentials output at the end of the installation process to log in to the OpenShift Container Platform cluster from the command line or web console. Install Red Hat OpenShift Data Foundation : You can install Red Hat OpenShift Data Foundation as an Operator to provide highly integrated and simplified persistent storage management for containers. Install a cluster on Nutanix : You can install a cluster on your Nutanix instance that uses installer-provisioned infrastructure. This type of installation lets you use the installation program to deploy a cluster on infrastructure that the installation program provisions and the cluster maintains. Red Hat Enterprise Linux CoreOS (RHCOS) image layering allows you to add new images on top of the base RHCOS image. This layering does not modify the base RHCOS image. Instead, it creates a custom layered image that includes all RHCOS functionality and adds additional functionality to specific nodes in the cluster. 1.2. Developer activities Develop and deploy containerized applications with OpenShift Container Platform. OpenShift Container Platform is a platform for developing and deploying containerized applications. OpenShift Container Platform documentation helps you: Understand OpenShift Container Platform development : Learn the different types of containerized applications, from simple containers to advanced Kubernetes deployments and Operators. Work with projects : Create projects from the OpenShift Container Platform web console or OpenShift CLI ( oc ) to organize and share the software you develop. Work with applications : Use the Developer perspective in the OpenShift Container Platform web console to create and deploy applications . Use the Topology view to see your applications, monitor status, connect and group components, and modify your code base.
Connect your workloads to backing services : The Service Binding Operator enables application developers to easily bind workloads with Operator-managed backing services by automatically collecting and sharing binding data with the workloads. The Service Binding Operator improves the development lifecycle with a consistent and declarative service binding method that prevents discrepancies in cluster environments. Use the developer CLI tool ( odo ) : The odo CLI tool lets developers create single or multi-component applications easily and automates deployment, build, and service route configurations. It abstracts complex Kubernetes and OpenShift Container Platform concepts, allowing you to focus on developing your applications. Create CI/CD Pipelines : Pipelines are serverless, cloud-native, continuous integration and continuous deployment systems that run in isolated containers. Pipelines use standard Tekton custom resources to automate deployments and are designed for decentralized teams that work on microservice-based architecture. Manage your infrastructure and application configurations : GitOps is a declarative way to implement continuous deployment for cloud native applications. GitOps defines infrastructure and application definitions as code. GitOps uses this code to manage multiple workspaces and clusters to simplify the creation of infrastructure and application configurations. GitOps also handles and automates complex deployments at a fast pace, which saves time during deployment and release cycles. Deploy Helm charts : Helm is a software package manager that simplifies deployment of applications and services to OpenShift Container Platform clusters. Helm uses a packaging format called charts. A Helm chart is a collection of files that describes the OpenShift Container Platform resources. Understand image builds : Choose from different build strategies (Docker, S2I, custom, and pipeline) that can include different kinds of source materials (from places like Git repositories, local binary inputs, and external artifacts). Then, follow examples of build types from basic builds to advanced builds. Create container images : A container image is the most basic building block in OpenShift Container Platform (and Kubernetes) applications. Defining image streams lets you gather multiple versions of an image in one place as you continue its development. S2I containers let you insert your source code into a base container that is set up to run code of a particular type, such as Ruby, Node.js, or Python. Create deployments : Use Deployment and DeploymentConfig objects to exert fine-grained management over applications. Manage deployments using the Workloads page or OpenShift CLI ( oc ). Learn rolling, recreate, and custom deployment strategies. Create templates : Use existing templates or create your own templates that describe how an application is built or deployed. A template can combine images with descriptions, parameters, replicas, exposed ports and other content that defines how an application can be run or built. Understand Operators : Operators are the preferred method for creating on-cluster applications for OpenShift Container Platform 4.12. Learn about the Operator Framework and how to deploy applications using installed Operators into your projects. Develop Operators : Operators are the preferred method for creating on-cluster applications for OpenShift Container Platform 4.12. Learn the workflow for building, testing, and deploying Operators. 
Then, create your own Operators based on Ansible or Helm , or configure built-in Prometheus monitoring using the Operator SDK. REST API reference : Learn about OpenShift Container Platform application programming interface endpoints. 1.3. Cluster administrator activities As a cluster administrator for OpenShift Container Platform, this documentation helps you: Understand OpenShift Container Platform management : Learn about components of the OpenShift Container Platform 4.12 control plane. See how OpenShift Container Platform control plane and compute nodes are managed and updated through the Machine API and Operators . Enable cluster capabilities that were disabled prior to installation Cluster administrators can enable cluster capabilities that were disabled prior to installation. For more information, see Enabling cluster capabilities . 1.3.1. Manage cluster components Manage machines : Manage compute and control plane machines in your cluster with machine sets, by deploying health checks , and applying autoscaling . Manage container registries : Each OpenShift Container Platform cluster includes a built-in container registry for storing its images. You can also configure a separate Red Hat Quay registry to use with OpenShift Container Platform. The Quay.io web site provides a public container registry that stores OpenShift Container Platform containers and Operators. Manage users and groups : Add users and groups with different levels of permissions to use or modify clusters. Manage authentication : Learn how user, group, and API authentication works in OpenShift Container Platform. OpenShift Container Platform supports multiple identity providers . Manage ingress , API server , and service certificates : OpenShift Container Platform creates certificates by default for the Ingress Operator, the API server, and for services needed by complex middleware applications that require encryption. You might need to change, add, or rotate these certificates. Manage networking : The cluster network in OpenShift Container Platform is managed by the Cluster Network Operator (CNO). The CNO uses iptables rules in kube-proxy to direct traffic between nodes and pods running on those nodes. The Multus Container Network Interface adds the capability to attach multiple network interfaces to a pod. Using network policy features, you can isolate your pods or permit selected traffic. Manage storage : OpenShift Container Platform allows cluster administrators to configure persistent storage using Red Hat OpenShift Data Foundation , AWS Elastic Block Store , NFS , iSCSI , Container Storage Interface (CSI) , and more. You can expand persistent volumes , configure dynamic provisioning , and use CSI to configure , clone , and use snapshots of persistent storage. Manage Operators : Lists of Red Hat, ISV, and community Operators can be reviewed by cluster administrators and installed on their clusters . After you install them, you can run , upgrade , back up, or otherwise manage the Operator on your cluster. 1.3.2. Change cluster components Use custom resource definitions (CRDs) to modify the cluster : Cluster features implemented with Operators can be modified with CRDs. Learn to create a CRD and manage resources from CRDs . Set resource quotas : Choose from CPU, memory, and other system resources to set quotas . Prune and reclaim resources : Reclaim space by pruning unneeded Operators, groups, deployments, builds, images, registries, and cron jobs. 
Scale and tune clusters : Set cluster limits, tune nodes, scale cluster monitoring, and optimize networking, storage, and routes for your environment. Update a cluster : Use the Cluster Version Operator (CVO) to upgrade your OpenShift Container Platform cluster. If an update is available from the OpenShift Update Service (OSUS), you apply that cluster update from either the OpenShift Container Platform web console or the OpenShift CLI ( oc ). Understanding the OpenShift Update Service : Learn about installing and managing a local OpenShift Update Service for recommending OpenShift Container Platform updates in disconnected environments. Improving cluster stability in high latency environments using worker latency profiles : If your network has latency issues, you can use one of three worker latency profiles to help ensure that your control plane does not accidentally evict pods in case it cannot reach a worker node. You can configure or modify the profile at any time during the life of the cluster. 1.3.3. Monitor the cluster Work with OpenShift Logging : Learn about OpenShift Logging and configure different OpenShift Logging types, such as Elasticsearch, Fluentd, and Kibana. Red Hat OpenShift distributed tracing platform : Store and visualize large volumes of requests passing through distributed systems, across the whole stack of microservices, and under heavy loads. Use the distributed tracing platform for monitoring distributed transactions, gathering insights into your instrumented services, network profiling, performance and latency optimization, root cause analysis, and troubleshooting the interaction between components in modern cloud-native microservices-based applications. Red Hat build of OpenTelemetry : Instrument, generate, collect, and export telemetry traces, metrics, and logs to analyze and understand your software's performance and behavior. Use open source backends like Tempo or Prometheus, or use commercial offerings. Learn a single set of APIs and conventions, and own the data that you generate. Network Observability : Observe network traffic for OpenShift Container Platform clusters by using eBPF technology to create and enrich network flows. You can view dashboards, customize alerts , and analyze network flow information for further insight and troubleshooting. In-cluster monitoring : Learn to configure the monitoring stack . After configuring monitoring, use the web console to access monitoring dashboards . In addition to infrastructure metrics, you can also scrape and view metrics for your own services. Remote health monitoring : OpenShift Container Platform collects anonymized aggregated information about your cluster. Using Telemetry and the Insights Operator, this data is received by Red Hat and used to improve OpenShift Container Platform. You can view the data collected by remote health monitoring .
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/about/welcome-index
Chapter 27. JvmOptions schema reference
Chapter 27. JvmOptions schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , KafkaNodePoolSpec , ZookeeperClusterSpec Property Description -XX A map of -XX options to the JVM. map -Xms -Xms option to the JVM. string -Xmx -Xmx option to the JVM. string gcLoggingEnabled Specifies whether Garbage Collection logging is enabled. The default is false. boolean javaSystemProperties A list of additional system properties that are passed using the -D option to the JVM. SystemProperty array
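For illustration only, the following sketch shows how these properties might be set in the kafka section of a Kafka resource (the cluster name and the specific option values here are assumptions, not defaults):

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster   # assumed example name
spec:
  kafka:
    # ... listeners, storage, and other required Kafka configuration omitted
    jvmOptions:
      "-Xms": 2048m
      "-Xmx": 2048m
      "-XX":
        "UseG1GC": "true"
      gcLoggingEnabled: false
      javaSystemProperties:
        - name: networkaddress.cache.ttl
          value: "30"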
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-JvmOptions-reference
Chapter 9. Installation configuration parameters for IBM Power Virtual Server
Chapter 9. Installation configuration parameters for IBM Power Virtual Server Before you deploy an OpenShift Container Platform cluster on IBM Power(R) Virtual Server, you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 9.1. Available installation configuration parameters for IBM Power Virtual Server The following tables specify the required, optional, and IBM Power Virtual Server-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 9.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 9.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } The UserID is the login for the user's IBM Cloud(R) account. String. For example, existing_user_id . The PowerVSResourceGroup is the resource group in which IBM Power(R) Virtual Server resources are created. If using an existing VPC, the existing VPC and subnets should be in this resource group. String. For example, existing_resource_group . Specifies the IBM Cloud(R) colo region where the cluster will be created. String. For example, existing_region . Specifies the IBM Cloud(R) colo zone where the cluster will be created. String. For example, existing_zone . The ServiceInstanceID is the ID of the Power IAAS instance created from the IBM Cloud(R) Catalog. String. For example, existing_service_instance_ID . 9.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported.
Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 9.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 192.168.0.0/24 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 9.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 9.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. 
For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. Example usage, compute.platform.powervs.sysType . alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. Example usage, controlPlane.platform.powervs.processors . alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). [1] Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources .
Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Specifies the IBM Cloud(R) region in which to create VPC resources. String. For example, existing_vpc_region . Specifies existing subnets (by name) where cluster resources will be created. String. For example, powervs_region_example_subnet . Specifies the IBM Cloud(R) VPC name. String. For example, existing_vpcName . The CloudConnectionName is the name of an existing PowerVS Cloud connection. String. For example, existing_cloudConnectionName . Specifies a pre-created IBM Power(R) Virtual Server boot image that overrides the default image for cluster nodes. String. For example, existing_cluster_os_image . Specifies the default configuration used when installing on IBM Power(R) Virtual Server for machine pools that do not define their own platform configuration. String. For example, existing_machine_platform . Specifies the size of a virtual machine's memory, in GB. The value must be an integer number of GB that is at least 2 and no more than 64, depending on the machine type. Defines the processor sharing model for the instance. The valid values are Capped, Dedicated, and Shared. Defines the processing units for the instance. The number of processors must be from .5 to 32 cores. The processors must be in increments of .25. Defines the system type for the instance. The system type must be e980 , s922 , e1080 , or s1022 . The available system types depend on the zone you want to target. Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Note Cloud connections are no longer supported in the install-config.yaml while deploying in the dal10 region, as they have been replaced by the Power Edge Router (PER).
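Taken together, a minimal install-config.yaml for IBM Power(R) Virtual Server might look like the following sketch; every domain, name, ID, and credential shown is a placeholder, and values such as the replica counts and network blocks simply restate the defaults described above:

apiVersion: v1
baseDomain: example.com
metadata:
  name: example-cluster
compute:
- architecture: ppc64le
  hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 3
controlPlane:
  architecture: ppc64le
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.0.0/24
  serviceNetwork:
  - 172.30.0.0/16
platform:
  powervs:
    userID: existing_user_id
    powervsResourceGroup: existing_resource_group
    region: existing_region
    zone: existing_zone
    serviceInstanceID: existing_service_instance_ID
publish: External
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...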
[ "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "platform: powervs: userID:", "platform: powervs: powervsResourceGroup:", "platform: powervs: region:", "platform: powervs: zone:", "platform: powervs: serviceInstanceID:", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:", "platform: powervs: vpcRegion:", "platform: powervs: vpcSubnets:", "platform: powervs: vpcName:", "platform: powervs: cloudConnectionName:", "platform: powervs: clusterOSImage:", "platform: powervs: defaultMachinePlatform:", "platform: powervs: memoryGiB:", "platform: powervs: procType:", "platform: powervs: processors:", "platform: powervs: sysType:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_ibm_power_virtual_server/installation-config-parameters-ibm-power-vs
Chapter 26. Identity Management
Chapter 26. Identity Management Identity Management ( IdM ) provides a unifying environment for standards-defined, common network services, including PAM, LDAP, Kerberos, DNS, NTP, and certificate services. IdM allows Red Hat Enterprise Linux systems to serve as domain controllers. [25] In Red Hat Enterprise Linux, the ipa-server package provides the IdM server. Enter the following command to see if the ipa-server package is installed: If it is not installed, enter the following command as the root user to install it: 26.1. Identity Management and SELinux Identity Management can map IdM users to configured SELinux roles per host so that it is possible to specify SELinux context for IdM access rights. During the user login process, the System Security Services Daemon ( SSSD ) queries the access rights defined for a particular IdM user. Then the pam_selinux module sends a request to the kernel to launch the user process with the proper SELinux context according to the IdM access rights, for example guest_u:guest_r:guest_t:s0 . For more information about Identity Management and SELinux, see the Linux Domain, Identity, Authentication, and Policy Guide for Red Hat Enterprise Linux 7. 26.1.1. Trust to Active Directory Domains In previous versions of Red Hat Enterprise Linux, Identity Management used the WinSync utility to allow users from Active Directory ( AD ) domains to access data stored on IdM domains. To do that, WinSync had to replicate the user and group data from the AD server to the local server and keep the data synchronized. In Red Hat Enterprise Linux 7, the SSSD daemon has been enhanced to work with AD, and users are able to create a trusted relationship between IdM and AD domains. The user and group data are read directly from the AD server. Additionally, Kerberos cross-realm trust is provided, allowing single sign-on ( SSO ) authentication between the AD and IdM domains. If SSO is set, users from the AD domains can access data protected by Kerberos that is stored on the IdM domains without requiring a password. This feature is not installed by default. To use it, install the additional ipa-server-trust-ad package. [25] For more information about Identity Management, see the Linux Domain, Identity, Authentication, and Policy Guide for Red Hat Enterprise Linux 7.
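For example, the trust feature described above can be added with the same package manager used for the IdM server (a minimal sketch, run as the root user):

~]# yum install ipa-server-trust-ad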
[ "~]USD rpm -q ipa-server package ipa-server is not installed", "~]# yum install ipa-server" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/chap-managing_confined_services-identity_management
Chapter 2. New features and enhancements
Chapter 2. New features and enhancements 2.1. Ability to apply Visual Studio Code editor configurations from a ConfigMap With this release, you can apply specific configuration properties to the Visual Studio Code - Open Source ("Code - OSS") editor using a dedicated ConfigMap: apiVersion: v1 kind: ConfigMap metadata: name: vscode-editor-configurations data: extensions.json: | { "recommendations": [ "dbaeumer.vscode-eslint", "github.vscode-pull-request-github" ] } settings.json: | { "window.header": "SOME HEADER MESSAGE", "window.commandCenter": false, "workbench.colorCustomizations": { "titleBar.activeBackground": "#CCA700", "titleBar.activeForeground": "#ffffff" } } immutable: false Learn more about this feature in the official documentation . Additional resources CRW-7258 2.2. UI/UX enhancements of the editor tiles on the User Dashboard The editor tiles displayed on the User Dashboard, including the license and version information, received a UI/UX enhancement. Additional resources CRW-7845 2.3. Display the full content of the gitconfig file on the User Dashboard With this release, you can view the full content of the .gitconfig file. Access it on the User Preferences Gitconfig tab by clicking Switch to Viewer . Additional resources CRW-8210 2.4. Detect support for fuse-overlayfs for Universal Developer Image Starting from this release, fuse-overlayfs will be detected automatically for the default Universal Developer Image. Additional resources CRW-8211 2.5. Configuring workspace endpoints base domain With this release, the official documentation for configuring workspace endpoints base domain is available. Additional resources CRW-8212 2.6. Persistent user home documentation With this release, the official documentation for persisting the /home/user directory across workspace restarts is available. Additional resources CRW-8213 2.7. Configuring proxy setting for https_proxy, http_proxy and no_proxy The official documentation that explains how to configure proxy settings is available. Additional resources CRW-8215 2.8. Allow to configure securityContext for the gateway container Starting from this release, the securityContext set in the CheCluster Custom Resource is applied to the Cloud Development Environment's (CDE) che-gateway container. Additional resources CRW-8221 2.9. Mount proxy environment variables to dashboard container With this release, if there is a proxy configured in the cluster, the proxy configuration is mounted to the che-dashboard container as environment variables: HTTP_PROXY , HTTPS_PROXY , and NO_PROXY . Additional resources CRW-8230 2.10. JetBrains Gateway available as a Technology Preview feature With this release, you can use JetBrains Gateway to connect your local JetBrains IDE (IntelliJ IDEA Ultimate, PyCharm, WebStorm, RubyMine, and CLion) to a running Dev Spaces instance. Important JetBrains Gateway is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Additional resources CRW-8245
[ "apiVersion: v1 kind: ConfigMap metadata: name: vscode-editor-configurations data: extensions.json: | { \"recommendations\": [ \"dbaeumer.vscode-eslint\", \"github.vscode-pull-request-github\" ] } settings.json: | { \"window.header\": \"SOME HEADER MESSAGE\", \"window.commandCenter\": false, \"workbench.colorCustomizations\": { \"titleBar.activeBackground\": \"#CCA700\", \"titleBar.activeForeground\": \"#ffffff\" } } immutable: false" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.19/html/release_notes_and_known_issues/new-features
Chapter 7. Configuring the TechDocs plugin in Red Hat Developer Hub
Chapter 7. Configuring the TechDocs plugin in Red Hat Developer Hub The Red Hat Developer Hub TechDocs plugin helps your organization create, find, and use documentation in a central location and in a standardized way. For example: Docs-like-code approach Write your technical documentation in Markdown files that are stored inside your project repository along with your code. Documentation site generation Use MkDocs to create a full-featured, Markdown-based, static HTML site for your documentation that is rendered centrally in Developer Hub. Documentation site metadata and integrations See additional metadata about the documentation site alongside the static documentation, such as the date of the last update, the site owner, top contributors, open GitHub issues, Slack support channels, and Stack Overflow Enterprise tags. Built-in navigation and search Find the information that you want from a document more quickly and easily. Add-ons Customize your TechDocs experience with Add-ons to address higher-order documentation needs. The TechDocs plugin is preinstalled and enabled on a Developer Hub instance by default. You can disable or enable the TechDocs plugin, and change other parameters, by configuring the Red Hat Developer Hub Helm chart or the Red Hat Developer Hub Operator config map. Important Red Hat Developer Hub includes a built-in TechDocs builder that generates static HTML documentation from your codebase. However, the default basic setup of the local builder is not intended for production. You can use a CI/CD pipeline with the repository that has a dedicated job to generate docs for TechDocs. The generated static files are stored in OpenShift Data Foundation or in a cloud storage solution of your choice and published to a static HTML documentation site. After you configure OpenShift Data Foundation to store the files that TechDocs generates, you can configure the TechDocs plugin to use the OpenShift Data Foundation for cloud storage. Additional resources For more information, see Configuring plugins in Red Hat Developer Hub . 7.1. Configuring storage for TechDocs files The TechDocs publisher stores generated files in local storage or in cloud storage, such as OpenShift Data Foundation, Google GCS, AWS S3, or Azure Blob Storage. 7.1.1. Using OpenShift Data Foundation for file storage You can configure OpenShift Data Foundation to store the files that TechDocs generates instead of relying on other cloud storage solutions. OpenShift Data Foundation provides an ObjectBucketClaim custom resource (CR) that you can use to request an S3 compatible bucket backend. You must install the OpenShift Data Foundation Operator to use this feature. Prerequisites An OpenShift Container Platform administrator has installed the OpenShift Data Foundation Operator in Red Hat OpenShift Container Platform. For more information, see OpenShift Container Platform - Installing Red Hat OpenShift Data Foundation Operator . An OpenShift Container Platform administrator has created an OpenShift Data Foundation cluster and configured the StorageSystem schema. For more information, see OpenShift Container Platform - Creating an OpenShift Data Foundation cluster . Procedure Create an ObjectBucketClaim CR where the generated TechDocs files are stored. 
For example: apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <rhdh_bucket_claim_name> spec: generateBucketName: <rhdh_bucket_claim_name> storageClassName: openshift-storage.noobaa.io Note Creating the Developer Hub ObjectBucketClaim CR automatically creates both the Developer Hub ObjectBucketClaim config map and secret. The config map and secret have the same name as the ObjectBucketClaim CR. After you create the ObjectBucketClaim CR, you can use the information stored in the config map and secret to make the information accessible to the Developer Hub container as environment variables. Depending on the method that you used to install Developer Hub, you add the access information to either the Red Hat Developer Hub Helm chart or Operator configuration. Additional resources For more information about the Object Bucket Claim, see OpenShift Container Platform - Object Bucket Claim . 7.1.2. Making object storage accessible to containers by using the Helm chart Creating an ObjectBucketClaim custom resource (CR) automatically generates both the Developer Hub ObjectBucketClaim config map and secret. The config map and secret contain ObjectBucket access information. Adding the access information to the Helm chart configuration makes it accessible to the Developer Hub container by adding the following environment variables to the container: BUCKET_NAME BUCKET_HOST BUCKET_PORT BUCKET_REGION BUCKET_SUBREGION AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY These variables are then used in the TechDocs plugin configuration. Prerequisites You have installed Red Hat Developer Hub on OpenShift Container Platform using the Helm chart. You have created an ObjectBucketClaim CR for storing files generated by TechDocs. For more information, see Using OpenShift Data Foundation for file storage . Procedure In the upstream.backstage key in the Helm chart values, enter the name of the Developer Hub ObjectBucketClaim secret as the value for the extraEnvVarsSecrets field and the extraEnvVarsCM field. For example: upstream: backstage: extraEnvVarsSecrets: - <rhdh_bucket_claim_name> extraEnvVarsCM: - <rhdh_bucket_claim_name> 7.1.2.1. Example TechDocs Plugin configuration for the Helm chart The following example shows a Developer Hub Helm chart configuration for the TechDocs plugin: global: dynamic: includes: - 'dynamic-plugins.default.yaml' plugins: - disabled: false package: ./dynamic-plugins/dist/backstage-plugin-techdocs-backend-dynamic pluginConfig: techdocs: builder: external generator: runIn: local publisher: awsS3: bucketName: '${BUCKET_NAME}' credentials: accessKeyId: '${AWS_ACCESS_KEY_ID}' secretAccessKey: '${AWS_SECRET_ACCESS_KEY}' endpoint: 'https://${BUCKET_HOST}' region: '${BUCKET_REGION}' s3ForcePathStyle: true type: awsS3 7.1.3. Making object storage accessible to containers by using the Operator Creating an ObjectBucketClaim Custom Resource (CR) automatically generates both the Developer Hub ObjectBucketClaim config map and secret. The config map and secret contain ObjectBucket access information. Adding the access information to the Operator configuration makes it accessible to the Developer Hub container by adding the following environment variables to the container: BUCKET_NAME BUCKET_HOST BUCKET_PORT BUCKET_REGION BUCKET_SUBREGION AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY These variables are then used in the TechDocs plugin configuration. Prerequisites You have installed Red Hat Developer Hub on OpenShift Container Platform using the Operator.
You have created an ObjectBucketClaim CR for storing files generated by TechDocs. Procedure In the Developer Hub Backstage CR, enter the name of the Developer Hub ObjectBucketClaim config map as the value for the spec.application.extraEnvs.configMaps field and enter the Developer Hub ObjectBucketClaim secret name as the value for the spec.application.extraEnvs.secrets field. For example: apiVersion: objectbucket.io/v1alpha1 kind: Backstage metadata: name: <name> spec: application: extraEnvs: configMaps: - name: <rhdh_bucket_claim_name> secrets: - name: <rhdh_bucket_claim_name> 7.1.3.1. Example TechDocs Plugin configuration for the Operator The following example shows a Red Hat Developer Hub Operator config map configuration for the TechDocs plugin: kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - disabled: false package: ./dynamic-plugins/dist/backstage-plugin-techdocs-backend-dynamic pluginConfig: techdocs: builder: external generator: runIn: local publisher: awsS3: bucketName: '${BUCKET_NAME}' credentials: accessKeyId: '${AWS_ACCESS_KEY_ID}' secretAccessKey: '${AWS_SECRET_ACCESS_KEY}' endpoint: 'https://${BUCKET_HOST}' region: '${BUCKET_REGION}' s3ForcePathStyle: true type: awsS3 7.2. Configuring CI/CD to generate and publish TechDocs sites TechDocs reads the static generated documentation files from a cloud storage bucket, such as OpenShift Data Foundation. The documentation site is generated on the CI/CD workflow associated with the repository containing the documentation files. You can generate docs on CI and publish to cloud storage using the techdocs-cli CLI tool. You can use the following example to create a script for TechDocs publication: # Prepare REPOSITORY_URL='https://github.com/org/repo' git clone $REPOSITORY_URL cd repo # Install @techdocs/cli, mkdocs and mkdocs plugins npm install -g @techdocs/cli pip install "mkdocs-techdocs-core==1.*" # Generate techdocs-cli generate --no-docker # Publish techdocs-cli publish --publisher-type awsS3 --storage-name <bucket/container> --entity <Namespace/Kind/Name> The TechDocs workflow starts the CI when a user makes changes in the repository containing the documentation files. You can configure the workflow to start only when files inside the docs/ directory or mkdocs.yml are changed. 7.2.1. Preparing your repository for CI The first step on the CI is to clone your documentation source repository in a working directory. Procedure To clone your documentation source repository in a working directory, enter the following command: git clone <https://path/to/docs-repository/> 7.2.2. Generating the TechDocs site Procedure To configure CI/CD to generate your techdocs, complete the following steps: Install the npx package to run techdocs-cli using the following command: npm install -g npx Install the techdocs-cli tool using the following command: npm install -g @techdocs/cli Install the mkdocs plugins using the following command: pip install "mkdocs-techdocs-core==1.*" Generate your techdocs site using the following command: npx @techdocs/cli generate --no-docker --source-dir <path_to_repo> --output-dir ./site Where <path_to_repo> is the location in the file path that you used to clone your repository. 7.2.3. Publishing the TechDocs site Procedure To publish your techdocs site, complete the following steps: Set the necessary authentication environment variables for your cloud storage provider.
Publish your techdocs using the following command: npx @techdocs/cli publish --publisher-type <awsS3|googleGcs> --storage-name <bucket/container> --entity <namespace/kind/name> --directory ./site Add a .github/workflows/techdocs.yml file in your Software Template(s). For example: name: Publish TechDocs Site on: push: branches: [main] # You can even set it to run only when TechDocs related files are updated. # paths: # - "docs/**" # - "mkdocs.yml" jobs: publish-techdocs-site: runs-on: ubuntu-latest # The following secrets are required in your CI environment for publishing files to AWS S3. # e.g. You can use GitHub Organization secrets to set them for all existing and new repositories. env: TECHDOCS_S3_BUCKET_NAME: ${{ secrets.TECHDOCS_S3_BUCKET_NAME }} AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} AWS_REGION: ${{ secrets.AWS_REGION }} ENTITY_NAMESPACE: 'default' ENTITY_KIND: 'Component' ENTITY_NAME: 'my-doc-entity' # In a Software template, Scaffolder will replace {{cookiecutter.component_id | jsonify}} # with the correct entity name. This is the same as metadata.name in the entity's catalog-info.yaml # ENTITY_NAME: '{{ cookiecutter.component_id | jsonify }}' steps: - name: Checkout code uses: actions/checkout@v3 - uses: actions/setup-node@v3 - uses: actions/setup-python@v4 with: python-version: '3.9' - name: Install techdocs-cli run: sudo npm install -g @techdocs/cli - name: Install mkdocs and mkdocs plugins run: python -m pip install mkdocs-techdocs-core==1.* - name: Generate docs site run: techdocs-cli generate --no-docker --verbose - name: Publish docs site run: techdocs-cli publish --publisher-type awsS3 --storage-name $TECHDOCS_S3_BUCKET_NAME --entity $ENTITY_NAMESPACE/$ENTITY_KIND/$ENTITY_NAME
[ "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <rhdh_bucket_claim_name> spec: generateBucketName: <rhdh_bucket_claim_name> storageClassName: openshift-storage.noobaa.io", "upstream: backstage: extraEnvVarsSecrets: - <rhdh_bucket_claim_name> extraEnvVarsCM: - <rhdh_bucket_claim_name>", "global: dynamic: includes: - 'dynamic-plugins.default.yaml' plugins: - disabled: false package: ./dynamic-plugins/dist/backstage-plugin-techdocs-backend-dynamic pluginConfig: techdocs: builder: external generator: runIn: local publisher: awsS3: bucketName: 'USD{BUCKET_NAME}' credentials: accessKeyId: 'USD{AWS_ACCESS_KEY_ID}' secretAccessKey: 'USD{AWS_SECRET_ACCESS_KEY}' endpoint: 'https://USD{BUCKET_HOST}' region: 'USD{BUCKET_REGION}' s3ForcePathStyle: true type: awsS3", "apiVersion: objectbucket.io/v1alpha1 kind: Backstage metadata: name: <name> spec: application: extraEnvs: configMaps: - name: <rhdh_bucket_claim_name> secrets: - name: <rhdh_bucket_claim_name>", "kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - disabled: false package: ./dynamic-plugins/dist/backstage-plugin-techdocs-backend-dynamic pluginConfig: techdocs: builder: external generator: runIn: local publisher: awsS3: bucketName: 'USD{BUCKET_NAME}' credentials: accessKeyId: 'USD{AWS_ACCESS_KEY_ID}' secretAccessKey: 'USD{AWS_SECRET_ACCESS_KEY}' endpoint: 'https://USD{BUCKET_HOST}' region: 'USD{BUCKET_REGION}' s3ForcePathStyle: true type: awsS3", "Prepare REPOSITORY_URL='https://github.com/org/repo' git clone USDREPOSITORY_URL cd repo Install @techdocs/cli, mkdocs and mkdocs plugins npm install -g @techdocs/cli pip install \"mkdocs-techdocs-core==1.*\" Generate techdocs-cli generate --no-docker Publish techdocs-cli publish --publisher-type awsS3 --storage-name <bucket/container> --entity <Namespace/Kind/Name>", "git clone <https://path/to/docs-repository/>", "npm install -g npx", "npm install -g @techdocs/cli", "pip install \"mkdocs-techdocs-core==1.*\"", "npx @techdocs/cli generate --no-docker --source-dir <path_to_repo> --output-dir ./site", "npx @techdocs/cli publish --publisher-type <awsS3|googleGcs> --storage-name <bucket/container> --entity <namespace/kind/name> --directory ./site", "name: Publish TechDocs Site on: push: branches: [main] # You can even set it to run only when TechDocs related files are updated. # paths: # - \"docs/**\" # - \"mkdocs.yml\" jobs: publish-techdocs-site: runs-on: ubuntu-latest # The following secrets are required in your CI environment for publishing files to AWS S3. # e.g. You can use GitHub Organization secrets to set them for all existing and new repositories. env: TECHDOCS_S3_BUCKET_NAME: USD{{ secrets.TECHDOCS_S3_BUCKET_NAME }} AWS_ACCESS_KEY_ID: USD{{ secrets.AWS_ACCESS_KEY_ID }} AWS_SECRET_ACCESS_KEY: USD{{ secrets.AWS_SECRET_ACCESS_KEY }} AWS_REGION: USD{{ secrets.AWS_REGION }} ENTITY_NAMESPACE: 'default' ENTITY_KIND: 'Component' ENTITY_NAME: 'my-doc-entity' # In a Software template, Scaffolder will replace {{cookiecutter.component_id | jsonify}} # with the correct entity name. 
This is same as metadata.name in the entity's catalog-info.yaml # ENTITY_NAME: '{{ cookiecutter.component_id | jsonify }}' steps: - name: Checkout code uses: actions/checkout@v3 - uses: actions/setup-node@v3 - uses: actions/setup-python@v4 with: python-version: '3.9' - name: Install techdocs-cli run: sudo npm install -g @techdocs/cli - name: Install mkdocs and mkdocs plugins run: python -m pip install mkdocs-techdocs-core==1.* - name: Generate docs site run: techdocs-cli generate --no-docker --verbose - name: Publish docs site run: techdocs-cli publish --publisher-type awsS3 --storage-name USDTECHDOCS_S3_BUCKET_NAME --entity USDENTITY_NAMESPACE/USDENTITY_KIND/USDENTITY_NAME" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/administration_guide_for_red_hat_developer_hub/assembly-techdocs-plugin_assembly-admin-templates
Chapter 4. Installing a cluster on Nutanix in a restricted network
Chapter 4. Installing a cluster on Nutanix in a restricted network In OpenShift Container Platform 4.16, you can install a cluster on Nutanix infrastructure in a restricted network by creating an internal mirror of the installation release content. 4.1. Prerequisites You have reviewed details about the OpenShift Container Platform installation and update processes. The installation program requires access to port 9440 on Prism Central and Prism Element. You verified that port 9440 is accessible. If you use a firewall, you have met these prerequisites: You confirmed that port 9440 is accessible. Control plane nodes must be able to reach Prism Central and Prism Element on port 9440 for the installation to succeed. You configured the firewall to grant access to the sites that OpenShift Container Platform requires. This includes the use of Telemetry. If your Nutanix environment uses the default self-signed SSL/TLS certificate, replace it with a certificate that is signed by a CA. The installation program requires a valid CA-signed certificate to access the Prism Central API. For more information about replacing the self-signed certificate, see the Nutanix AOS Security Guide . If your Nutanix environment uses an internal CA to issue certificates, you must configure a cluster-wide proxy as part of the installation process. For more information, see Configuring a custom PKI . Important Use 2048-bit certificates. The installation fails if you use 4096-bit certificates with Prism Central 2022.x. You have a container image registry, such as Red Hat Quay. If you do not already have a registry, you can create a mirror registry using mirror registry for Red Hat OpenShift . You have used the oc-mirror OpenShift CLI (oc) plugin to mirror all of the required OpenShift Container Platform content and other images, including the Nutanix CSI Operator, to your mirror registry. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. 4.2. About installations in restricted networks In OpenShift Container Platform 4.16, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 4.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 4.3.
Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Adding Nutanix root CA certificates to your system trust Because the installation program requires access to the Prism Central API, you must add your Nutanix trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the Prism Central web console, download the Nutanix root CA certificates. 
Extract the compressed file that contains the Nutanix root CA certificates. Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 4.5. Downloading the RHCOS cluster image Prism Central requires access to the Red Hat Enterprise Linux CoreOS (RHCOS) image to install the cluster. You can use the installation program to locate and download the RHCOS image and make it available through an internal HTTP server or Nutanix Objects. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install coreos print-stream-json Use the output of the command to find the location of the Nutanix image, and click the link to download it. Example output "nutanix": { "release": "411.86.202210041459-0", "formats": { "qcow2": { "disk": { "location": "https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.11/411.86.202210041459-0/x86_64/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2", "sha256": "42e227cac6f11ac37ee8a2f9528bb3665146566890577fd55f9b950949e5a54b" Make the image available through an internal HTTP server or Nutanix Objects. Note the location of the downloaded image. You update the platform section in the installation configuration file ( install-config.yaml ) with the image's location before deploying the cluster. Snippet of an install-config.yaml file that specifies the RHCOS image platform: nutanix: clusterOSImage: http://example.com/images/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2 4.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Nutanix. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSourcePolicy.yaml file that was created when you mirrored your registry. You have the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image you download. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. You have verified that you have met the Nutanix networking requirements. For more information, see "Preparing to install on Nutanix". Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. 
However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select nutanix as the platform to target. Enter the Prism Central domain name or IP address. Enter the port that is used to log into Prism Central. Enter the credentials that are used to log into Prism Central. The installation program connects to Prism Central. Select the Prism Element that will manage the OpenShift Container Platform cluster. Select the network subnet to use. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you configured in the DNS records. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. In the install-config.yaml file, set the value of platform.nutanix.clusterOSImage to the image location or name. For example: platform: nutanix: clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 Edit the install-config.yaml file to provide the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Optional: Update one or more of the default configuration parameters in the install-config.yaml file to customize the installation. For more information about the parameters, see "Installation configuration parameters". Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on Nutanix".
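Taken together, the restricted-network additions described above might resemble the following excerpt of an install-config.yaml file. This is an illustrative sketch only; the registry host, repository name, and certificate contents are placeholders taken from the preceding steps:

pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}'
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <certificate_contents>
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: registry.redhat.io/ocp/release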
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for Nutanix 4.6.1. Sample customized install-config.yaml file for Nutanix You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIP: 10.40.142.7 12 ingressVIP: 10.40.142.8 13 defaultMachinePlatform: bootType: Legacy categories: 14 - key: <category_key_name> value: <category_value> project: 15 type: name name: <project_name> prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 10 12 13 16 17 18 19 Required. The installation program prompts you for this value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . 
If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 8 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 5 9 14 Optional: Provide one or more pairs of a prism category key and a prism category value. These category key-value pairs must exist in Prism Central. You can provide separate categories to compute machines, control plane machines, or all machines. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 15 Optional: Specify a project with which VMs are associated. Specify either name or uuid for the project type, and then provide the corresponding UUID or project name. You can associate projects to compute machines, control plane machines, or all machines. 20 Optional: By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image. If Prism Central does not have internet access, you can override the default behavior by hosting the RHCOS image on any HTTP server or Nutanix Objects and pointing the installation program to the image. 21 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 Optional: You can provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 Provide the contents of the certificate file that you used for your mirror registry. 25 Provide these values from the metadata.name: release-0 section of the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. 4.6.2. Configuring failure domains Failure domains improve the fault tolerance of an OpenShift Container Platform cluster by distributing control plane and compute machines across multiple Nutanix Prism Elements (clusters). Tip It is recommended that you configure three failure domains to ensure high-availability. Prerequisites You have an installation configuration file ( install-config.yaml ). Procedure Edit the install-config.yaml file and add the following stanza to configure the first failure domain: apiVersion: v1 baseDomain: example.com compute: # ... platform: nutanix: failureDomains: - name: <failure_domain_name> prismElement: name: <prism_element_name> uuid: <prism_element_uuid> subnetUUIDs: - <network_uuid> # ... 
where: <failure_domain_name> Specifies a unique name for the failure domain. The name is limited to 64 or fewer characters, which can include lower-case letters, digits, and a dash ( - ). The dash cannot be in the leading or ending position of the name. <prism_element_name> Optional. Specifies the name of the Prism Element. <prism_element_uuid > Specifies the UUID of the Prism Element. <network_uuid > Specifies the UUID of the Prism Element subnet object. The subnet's IP address prefix (CIDR) should contain the virtual IP addresses that the OpenShift Container Platform cluster uses. Only one subnet per failure domain (Prism Element) in an OpenShift Container Platform cluster is supported. As required, configure additional failure domains. To distribute control plane and compute machines across the failure domains, do one of the following: If compute and control plane machines can share the same set of failure domains, add the failure domain names under the cluster's default machine configuration. Example of control plane and compute machines sharing a set of failure domains apiVersion: v1 baseDomain: example.com compute: # ... platform: nutanix: defaultMachinePlatform: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 # ... If compute and control plane machines must use different failure domains, add the failure domain names under the respective machine pools. Example of control plane and compute machines using different failure domains apiVersion: v1 baseDomain: example.com compute: # ... controlPlane: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 # ... compute: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 # ... Save the file. 4.6.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.7. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.8. Configuring IAM for Nutanix Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets. Prerequisites You have configured the ccoctl binary. You have an install-config.yaml file. Procedure Create a YAML file that contains the credentials data in the following format: Credentials data format credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element> 1 Specify the authentication type. Only basic authentication is supported. 2 Specify the Prism Central credentials. 3 Optional: Specify the Prism Element credentials. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. 
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: "true" labels: controller-tools.k8s.io: "1.0" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl nutanix create-shared-secrets \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --credentials-source-filepath=<path_to_credentials_file> 3 1 Specify the path to the directory that contains the files for the component CredentialsRequests objects. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Optional: Specify the directory that contains the credentials data YAML file. By default, ccoctl expects this file to be in <home_directory>/.nutanix/credentials . Edit the install-config.yaml configuration file so that the credentialsMode parameter is set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 ... 1 Add this line to set the credentialsMode parameter to Manual . Create the installation manifests by running the following command: USD openshift-install create manifests --dir <installation_directory> 1 1 Specify the path to the directory that contains the install-config.yaml file for your cluster. Copy the generated credential files to the target manifests directory by running the following command: USD cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests Verification Ensure that the appropriate secrets exist in the manifests directory. USD ls ./<installation_directory>/manifests Example output cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml 4.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 
2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.10. Post installation Complete the following steps to complete the configuration of your cluster. 4.10.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 4.10.2. Installing the policy resources into the cluster Mirroring the OpenShift Container Platform content using the oc-mirror OpenShift CLI (oc) plugin creates resources, which include catalogSource-certified-operator-index.yaml and imageContentSourcePolicy.yaml . The ImageContentSourcePolicy resource associates the mirror registry with the source registry and redirects image pull requests from the online registries to the mirror registry. The CatalogSource resource is used by Operator Lifecycle Manager (OLM) to retrieve information about the available Operators in the mirror registry, which lets users discover and install Operators. After you install the cluster, you must install these resources into the cluster. 
Prerequisites You have mirrored the image set to the registry mirror in the disconnected environment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift CLI as a user with the cluster-admin role. Apply the YAML files from the results directory to the cluster: USD oc apply -f ./oc-mirror-workspace/results-<id>/ Verification Verify that the ImageContentSourcePolicy resources were successfully installed: USD oc get imagecontentsourcepolicy Verify that the CatalogSource resources were successfully installed: USD oc get catalogsource --all-namespaces 4.10.3. Configuring the default storage container After you install the cluster, you must install the Nutanix CSI Operator and configure the default storage container for the cluster. For more information, see the Nutanix documentation for installing the CSI Operator and configuring registry storage . 4.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. 4.12. Additional resources About remote health monitoring 4.13. steps If necessary, see Opt out of remote health reporting If necessary, see Registering your disconnected cluster Customize your cluster
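As an informal final check that is not part of the documented procedure, you can confirm that the nodes, cluster Operators, and cluster version all report a healthy state before you continue with the next steps. These are standard oc commands run against the new cluster:

USD oc get nodes
USD oc get clusteroperators
USD oc get clusterversion

All nodes should report Ready , and the cluster Operators should report Available=True with no Degraded conditions.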
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install coreos print-stream-json", "\"nutanix\": { \"release\": \"411.86.202210041459-0\", \"formats\": { \"qcow2\": { \"disk\": { \"location\": \"https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.11/411.86.202210041459-0/x86_64/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2\", \"sha256\": \"42e227cac6f11ac37ee8a2f9528bb3665146566890577fd55f9b950949e5a54b\"", "platform: nutanix: clusterOSImage: http://example.com/images/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2", "./openshift-install create install-config --dir <installation_directory> 1", "platform: nutanix: clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIP: 10.40.142.7 12 ingressVIP: 10.40.142.8 13 defaultMachinePlatform: bootType: Legacy categories: 14 - key: <category_key_name> value: <category_value> project: 15 type: name name: <project_name> prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 
23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: example.com compute: platform: nutanix: failureDomains: - name: <failure_domain_name> prismElement: name: <prism_element_name> uuid: <prism_element_uuid> subnetUUIDs: - <network_uuid>", "apiVersion: v1 baseDomain: example.com compute: platform: nutanix: defaultMachinePlatform: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3", "apiVersion: v1 baseDomain: example.com compute: controlPlane: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 compute: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" labels: controller-tools.k8s.io: \"1.0\" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api", "ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1", "openshift-install create manifests --dir <installation_directory> 1", "cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests", "ls ./<installation_directory>/manifests", "cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml 
openshift-machine-api-nutanix-credentials-credentials.yaml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc apply -f ./oc-mirror-workspace/results-<id>/", "oc get imagecontentsourcepolicy", "oc get catalogsource --all-namespaces" ]
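The prerequisites for this chapter assume that the release content and the Nutanix CSI Operator were already mirrored with the oc-mirror plugin. As a rough, non-authoritative sketch of that earlier step, an ImageSetConfiguration might resemble the following; the registry URL, update channel, catalog reference, and package name are placeholders that depend on your environment:

apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
storageConfig:
  registry:
    imageURL: <mirror_host_name>:5000/oc-mirror-metadata
mirror:
  platform:
    channels:
    - name: stable-4.16
  operators:
  - catalog: registry.redhat.io/redhat/certified-operator-index:v4.16
    packages:
    - name: <nutanix_csi_operator_package>

You would then run oc mirror --config=imageset-config.yaml docker://<mirror_host_name>:5000 from a host that can reach both the internet and the mirror registry; this produces the results directory that is later applied in "Installing the policy resources into the cluster".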
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_nutanix/installing-restricted-networks-nutanix-installer-provisioned
5. Feedback
5. Feedback If you spot a typo, or if you have thought of a way to make this manual better, we would love to hear from you. Please submit a report in Bugzilla ( http://bugzilla.redhat.com/bugzilla/ ) against the component rh-cs . Be sure to mention the manual's identifier, shown below. By mentioning this manual's identifier, we know exactly which version of the guide you have. If you have a suggestion for improving the documentation, try to be as specific as possible. If you have found an error, please include the section number and some of the surrounding text so we can find it easily.
[ "rh-gfs(EN)-4.8 (2009-05-15T15:10)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_network_block_device/s1-intro-feedback-GFS
Chapter 10. HelmChartRepository [helm.openshift.io/v1beta1]
Chapter 10. HelmChartRepository [helm.openshift.io/v1beta1] Description HelmChartRepository holds cluster-wide configuration for proxied Helm chart repository Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object Observed status of the repository within the cluster.. 10.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description connectionConfig object Required configuration for connecting to the chart repo description string Optional human readable repository description, it can be used by UI for displaying purposes disabled boolean If set to true, disable the repo usage in the cluster/namespace name string Optional associated human readable repository name, it can be used by UI for displaying purposes 10.1.2. .spec.connectionConfig Description Required configuration for connecting to the chart repo Type object Property Type Description ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca-bundle.crt" is used to locate the data. If empty, the default system roots are used. The namespace for this config map is openshift-config. tlsClientConfig object tlsClientConfig is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate and private key to present when connecting to the server. The key "tls.crt" is used to locate the client certificate. The key "tls.key" is used to locate the private key. The namespace for this secret is openshift-config. url string Chart repository URL 10.1.3. .spec.connectionConfig.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca-bundle.crt" is used to locate the data. If empty, the default system roots are used. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 10.1.4. .spec.connectionConfig.tlsClientConfig Description tlsClientConfig is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate and private key to present when connecting to the server. The key "tls.crt" is used to locate the client certificate. The key "tls.key" is used to locate the private key. The namespace for this secret is openshift-config. 
Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 10.1.5. .status Description Observed status of the repository within the cluster.. Type object Property Type Description conditions array conditions is a list of conditions and their statuses conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } 10.1.6. .status.conditions Description conditions is a list of conditions and their statuses Type array 10.1.7. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 10.2. 
API endpoints The following API endpoints are available: /apis/helm.openshift.io/v1beta1/helmchartrepositories DELETE : delete collection of HelmChartRepository GET : list objects of kind HelmChartRepository POST : create a HelmChartRepository /apis/helm.openshift.io/v1beta1/helmchartrepositories/{name} DELETE : delete a HelmChartRepository GET : read the specified HelmChartRepository PATCH : partially update the specified HelmChartRepository PUT : replace the specified HelmChartRepository /apis/helm.openshift.io/v1beta1/helmchartrepositories/{name}/status GET : read status of the specified HelmChartRepository PATCH : partially update status of the specified HelmChartRepository PUT : replace status of the specified HelmChartRepository 10.2.1. /apis/helm.openshift.io/v1beta1/helmchartrepositories HTTP method DELETE Description delete collection of HelmChartRepository Table 10.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind HelmChartRepository Table 10.2. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepositoryList schema 401 - Unauthorized Empty HTTP method POST Description create a HelmChartRepository Table 10.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.4. Body parameters Parameter Type Description body HelmChartRepository schema Table 10.5. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 201 - Created HelmChartRepository schema 202 - Accepted HelmChartRepository schema 401 - Unauthorized Empty 10.2.2. /apis/helm.openshift.io/v1beta1/helmchartrepositories/{name} Table 10.6. Global path parameters Parameter Type Description name string name of the HelmChartRepository HTTP method DELETE Description delete a HelmChartRepository Table 10.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 10.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified HelmChartRepository Table 10.9. 
HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified HelmChartRepository Table 10.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.11. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified HelmChartRepository Table 10.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.13. Body parameters Parameter Type Description body HelmChartRepository schema Table 10.14. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 201 - Created HelmChartRepository schema 401 - Unauthorized Empty 10.2.3. /apis/helm.openshift.io/v1beta1/helmchartrepositories/{name}/status Table 10.15. Global path parameters Parameter Type Description name string name of the HelmChartRepository HTTP method GET Description read status of the specified HelmChartRepository Table 10.16. 
HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified HelmChartRepository Table 10.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.18. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified HelmChartRepository Table 10.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.20. Body parameters Parameter Type Description body HelmChartRepository schema Table 10.21. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 201 - Created HelmChartRepository schema 401 - Unauthorized Empty
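The endpoints listed in this reference are normally exercised through the oc client, which calls them on your behalf. The commands below are a hedged usage sketch only; the repository name my-repo and the disabled field are illustrative assumptions rather than values taken from this reference. # List HelmChartRepository objects (GET on the collection endpoint) oc get helmchartrepositories.helm.openshift.io # Read one object, including its status conditions (GET on the {name} and {name}/status endpoints) oc get helmchartrepositories.helm.openshift.io my-repo -o yaml # Partially update an object (PATCH on the {name} endpoint) oc patch helmchartrepositories.helm.openshift.io my-repo --type=merge -p '{"spec":{"disabled":true}}'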
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/config_apis/helmchartrepository-helm-openshift-io-v1beta1
Chapter 1. Overview of OpenShift AI
Chapter 1. Overview of OpenShift AI Red Hat OpenShift AI is a platform for data scientists and developers of artificial intelligence and machine learning (AI/ML) applications. OpenShift AI provides an environment to develop, train, serve, test, and monitor AI/ML models and applications on-premise or in the cloud. For data scientists, OpenShift AI includes Jupyter and a collection of default notebook images optimized with the tools and libraries required for model development, and the TensorFlow and PyTorch frameworks. Deploy and host your models, integrate models into external applications, and export models to host them in any hybrid cloud environment. You can enhance your data science projects on OpenShift AI by building portable machine learning (ML) workflows with data science pipelines, using Docker containers. You can also accelerate your data science experiments through the use of graphics processing units (GPUs) and Intel Gaudi AI accelerators. For administrators, OpenShift AI enables data science workloads in an existing Red Hat OpenShift or ROSA environment. Manage users with your existing OpenShift identity provider, and manage the resources available to notebook servers to ensure data scientists have what they require to create, train, and host models. Use accelerators to reduce costs and allow your data scientists to enhance the performance of their end-to-end data science workflows using graphics processing units (GPUs) and Intel Gaudi AI accelerators. OpenShift AI has two deployment options: Self-managed software that you can install on-premise or in the cloud. You can install OpenShift AI Self-Managed in a self-managed environment such as OpenShift Container Platform, or in Red Hat-managed cloud environments such as Red Hat OpenShift Dedicated (with a Customer Cloud Subscription for AWS or GCP), Red Hat OpenShift Service on Amazon Web Services (ROSA Classic or ROSA HCP), or Microsoft Azure Red Hat OpenShift. A managed cloud service , installed as an add-on in Red Hat OpenShift Dedicated (with a Customer Cloud Subscription for AWS or GCP) or in Red Hat OpenShift Service on Amazon Web Services (ROSA Classic). For information about OpenShift AI Cloud Service, see Product Documentation for Red Hat OpenShift AI . For information about OpenShift AI supported software platforms, components, and dependencies, see the Red Hat OpenShift AI: Supported Configurations Knowledgebase article. For a detailed view of the 2.18 release lifecycle, including the full support phase window, see the Red Hat OpenShift AI Self-Managed Life Cycle Knowledgebase article.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/release_notes/overview-of-openshift-ai_relnotes
32.3.7. Exiting the Utility
32.3.7. Exiting the Utility To exit the interactive prompt and terminate crash , type exit or q . Example 32.8. Exiting the crash utility
[ "crash> exit ~]#" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-kdump-crash-exit
C.5. Selection Criteria Display Examples
C.5. Selection Criteria Display Examples This section provides a series of examples showing how to use selection criteria for LVM display commands. The examples in this section use a system configured with LVM volumes that yield the following output when selection criteria are not used. The following command displays all logical volumes with "lvol[13]" in their name, using a regular expression to specify this. The following command displays all logical volumes greater than 500 megabytes in size. The following command displays all logical volumes that include thin as a logical volume role, indicating that the logical volume is used in constructing a thin pool. This example uses braces ({}) to indicate a subset in the display. The following command displays all usable top-level logical volumes, which are the logical volumes with a role of "public". If you do not specify braces ({}) in a string list to indicate a subset, it is assumed by default; specifying lv_role=public is equivalent to specifying lv_role={public} . The following command displays all logical volumes with a thin layout. The following command displays all logical volumes with a layout field that matches "sparse,thin" exactly. Note that it is not necessary to specify the string list members in the same order for the match to be positive. The following command displays the logical volume names of the logical volumes that are thin, sparse logical volumes. Note that the list of fields used for selection criteria does not need to be the same as the list of fields to display. An additional combined example is shown after the command listing below.
[ "lvs -a -o+layout,role LV VG Attr LSize Pool Origin Data% Meta% Layout Role root f1 -wi-ao---- 9.01g linear public swap f1 -wi-ao---- 512.00m linear public [lvol0_pmspare] vg ewi------- 4.00m linear private, pool,spare lvol1 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public lvol2 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public, origin, thinorigin lvol3 vg Vwi---tz-k 1.00g pool lvol2 thin,sparse public, snapshot, thinsnapshot pool vg twi-aotz-- 100.00m 0.00 1.07 thin,pool private [pool_tdata] vg Twi-ao---- 100.00m linear private, thin,pool, data [pool_tmeta] vg ewi-ao---- 4.00m linear private, thin,pool, metadata", "lvs -a -o+layout,role -S 'lv_name=~lvol[13]' LV VG Attr LSize Pool Origin Data% Layout Role lvol1 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public lvol3 vg Vwi---tz-k 1.00g pool lvol2 thin,sparse public,snapshot,thinsnapshot", "lvs -a -o+layout,role -S 'lv_size>500m' LV VG Attr LSize Pool Origin Data% Layout Role root f1 -wi-ao---- 9.01g linear public swap f1 -wi-ao---- 512.00m linear public lvol1 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public lvol2 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public,origin,thinorigin lvol3 vg Vwi---tz-k 1.00g pool lvol2 thin,sparse public,snapshot, thinsnapshot", "lvs -a -o+layout,role -S 'lv_role={thin}' LV VG Attr LSize Layout Role [pool_tdata] vg Twi-ao---- 100.00m linear private,thin,pool,data [pool_tmeta] vg ewi-ao---- 4.00m linear private,thin,pool,metadata", "lvs -a -o+layout,role -S 'lv_role=public' LV VG Attr LSize Pool Origin Data% Layout Role root f1 -wi-ao---- 9.01g linear public swap f1 -wi-ao---- 512.00m linear public lvol1 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public lvol2 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public,origin,thinorigin lvol3 vg Vwi---tz-k 1.00g pool lvol2 thin,sparse public,snapshot,thinsnapshot", "lvs -a -o+layout,role -S 'lv_layout={thin}' LV VG Attr LSize Pool Origin Data% Meta% Layout Role lvol1 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public lvol2 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public,origin, thinorigin lvol3 vg Vwi---tz-k 1.00g pool lvol2 thin,sparse public,snapshot, thinsnapshot pool vg twi-aotz-- 100.00m 0.00 1.07 thin,pool private", "lvs -a -o+layout,role -S 'lv_layout=[sparse,thin]' LV VG Attr LSize Pool Origin Data% Layout Role lvol1 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public lvol2 vg Vwi-a-tz-- 1.00g pool 0.00 thin,sparse public,origin,thinorigin lvol3 vg Vwi---tz-k 1.00g pool lvol2 thin,sparse public,snapshot,thinsnapshot", "lvs -a -o lv_name -S 'lv_layout=[sparse,thin]' LV lvol1 lvol2 lvol3" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/selection_display_examples
Red Hat OpenShift Data Foundation architecture
Red Hat OpenShift Data Foundation architecture Red Hat OpenShift Data Foundation 4.13 Overview of OpenShift Data Foundation architecture and the roles that the components and services perform. Red Hat Storage Documentation Team [email protected] Abstract This document provides an overview of the OpenShift Data Foundation architecture.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/red_hat_openshift_data_foundation_architecture/index
Chapter 1. Migration Overview
Chapter 1. Migration Overview When you specify a backup file during self-hosted engine deployment, the Manager backup is restored on a new virtual machine, with a dedicated self-hosted engine storage domain. Deploying on a fresh host is highly recommended; if the host used for deployment existed in the backed up environment, it will be removed from the restored database to avoid conflicts in the new environment. If you deploy on a new host, you must assign a unique name to the host. Reusing the name of an existing host included in the backup can cause conflicts in the new environment. At least two self-hosted engine nodes are required for the Manager virtual machine to be highly available. You can add new nodes, or convert existing hosts. The migration involves the following key steps: Install a new host to deploy the self-hosted engine on. You can use either host type: Red Hat Virtualization Host Red Hat Enterprise Linux Prepare storage for the self-hosted engine storage domain. You can use one of the following storage types: NFS iSCSI Fibre Channel (FCP) Red Hat Gluster Storage Update the original Manager to the latest minor version before you back it up. Back up the original Manager using the engine-backup tool. Deploy a new self-hosted engine and restore the backup. Enable the Manager repositories on the new Manager virtual machine. Convert regular hosts to self-hosted engine nodes that can host the new Manager. This procedure assumes that you have access and can make changes to the original Manager. Prerequisites FQDNs prepared for your Manager and the deployment host. Forward and reverse lookup records must both be set in the DNS. The new Manager must have the same FQDN as the original Manager. The management network ( ovirtmgmt by default) must be configured as a VM network , so that it can manage the Manager virtual machine.
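As an illustrative sketch of the backup and restore steps listed above (the file names are assumptions, and exact options can vary between versions): # On the original Manager machine, create a backup with the engine-backup tool engine-backup --mode=backup --file=backup.bck --log=backup.log # On the new self-hosted engine host, deploy the self-hosted engine and restore from that backup hosted-engine --deploy --restore-from-file=backup.bck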
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/migrating_from_a_standalone_manager_to_a_self-hosted_engine/migration_overview
Chapter 4. Setting up Enterprise Security Client
Chapter 4. Setting up Enterprise Security Client The following sections contain basic instructions on using the Enterprise Security Client for token enrollment, formatting, and password reset operations. 4.1. Installing the Smart Card Package Group Packages used to manage smart cards, such as esc , should already be installed on the Red Hat Enterprise Linux system. If the packages are not installed or need to be updated, all of the smart card-related packages can be pulled in by installing the Smart card support package group. For example:
[ "groupinstall \"Smart card support\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_smart_cards/Using_the_Enterprise_Security_Client
4.6. iSCSI and DM Multipath overrides
4.6. iSCSI and DM Multipath overrides The recovery_tmo sysfs option controls the timeout for a particular iSCSI device. The following options globally override recovery_tmo values: The replacement_timeout configuration option globally overrides the recovery_tmo value for all iSCSI devices. For all iSCSI devices that are managed by DM Multipath, the fast_io_fail_tmo option in DM Multipath globally overrides the recovery_tmo value. The fast_io_fail_tmo option in DM Multipath also overrides the fast_io_fail_tmo option in Fibre Channel devices. The DM Multipath fast_io_fail_tmo option takes precedence over replacement_timeout . Red Hat does not recommend using replacement_timeout to override recovery_tmo in devices managed by DM Multipath because DM Multipath always resets recovery_tmo when the multipathd service reloads.
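As a hedged illustration of where these timeouts are typically set (the numeric values are placeholders, not recommendations): # /etc/iscsi/iscsid.conf - replacement_timeout globally overrides recovery_tmo for iSCSI devices node.session.timeo.replacement_timeout = 120 # /etc/multipath.conf - fast_io_fail_tmo in the defaults section overrides recovery_tmo for iSCSI devices managed by DM Multipath defaults { fast_io_fail_tmo 5 } # The per-session recovery_tmo value can be inspected through sysfs: cat /sys/class/iscsi_session/session*/recovery_tmo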
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/dm_multipath/iscsi-and-dm-multipath-overrides
18.2. Consoles and Logging During the Installation
18.2. Consoles and Logging During the Installation The following sections describe how to access logs and an interactive shell during the installation. This is useful when troubleshooting problems, but should not be necessary in most cases. 18.2.1. Accessing Consoles The Red Hat Enterprise Linux installer uses the tmux terminal multiplexer to display and control several windows you can use in addition to the main interface. Each of these windows serves a different purpose - they display several different logs, which can be used to troubleshoot any issues during the installation, and one of the windows provides an interactive shell prompt with root privileges, unless this prompt was specifically disabled using a boot option or a Kickstart command. Note In general, there is no reason to leave the default graphical installation environment unless you need to diagnose an installation problem. The terminal multiplexer is running in virtual console 1. To switch from the graphical installation environment to tmux , press Ctrl + Alt + F1 . To go back to the main installation interface which runs in virtual console 6, press Ctrl + Alt + F6 . Note If you choose text mode installation, you will start in virtual console 1 ( tmux ), and switching to console 6 will open a shell prompt instead of a graphical interface. The console running tmux has 5 available windows; their contents are described in the table below, along with keyboard shortcuts used to access them. Note that the keyboard shortcuts are two-part: first press Ctrl + b , then release both keys, and press the number key for the window you want to use. You can also use Ctrl + b n and Ctrl + b p to switch to the next or previous tmux window, respectively. Table 18.1. Available tmux Windows Shortcut Contents Ctrl + b 1 Main installation program window. Contains text-based prompts (during text mode installation or if you use VNC Direct Mode), and also some debugging information. Ctrl + b 2 Interactive shell prompt with root privileges. Ctrl + b 3 Installation log; displays messages stored in /tmp/anaconda.log . Ctrl + b 4 Storage log; displays messages related to storage devices from kernel and system services, stored in /tmp/storage.log . Ctrl + b 5 Program log; displays messages from other system utilities, stored in /tmp/program.log . In addition to displaying diagnostic information in tmux windows, Anaconda also generates several log files, which can be transferred from the installation system. These log files are described in Table 19.1, "Log Files Generated During the Installation" , and directions for transferring them from the installation system are available in Chapter 19, Troubleshooting Installation on IBM Z . 18.2.2. Saving Screenshots You can press Shift + Print Screen at any time during the graphical installation to capture the current screen. These screenshots are saved to /tmp/anaconda-screenshots/ . Additionally, you can use the autostep --autoscreenshot command in a Kickstart file to capture and save each step of the installation automatically. See Section 27.3.1, "Kickstart Commands and Options" for details.
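For example, the logs named above can also be followed directly from the root shell in window 2; a minimal sketch: tail -f /tmp/anaconda.log /tmp/storage.log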
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-consoles-logs-during-installation-s390
Chapter 4. Clustering
Chapter 4. Clustering systemd and pacemaker now coordinate correctly during system shutdown Previously, systemd and pacemaker did not coordinate correctly during system shutdown, which caused pacemaker resources not to be terminated properly. With this update, pacemaker is ordered to stop before dbus and other systemd services that pacemaker started. This allows both pacemaker and the resources that pacemaker manages to shut down properly. The pcs resource move and pcs resource ban commands now display a warning message to clarify the commands' behavior The pcs resource move and pcs resource ban commands create location constraints that effectively ban the resource from running on the current node until the constraint is removed or until the constraint lifetime expires. This behavior had previously not been clear to users. These commands now display a warning message explaining this behavior, and the help screens and documentation for these commands have been clarified. New command to move a Pacemaker resource to its preferred node After a Pacemaker resource has moved, either due to a failover or to an administrator manually moving the node, it will not necessarily move back to its original node even after the circumstances that caused the failover have been corrected. You can now use the pcs resource relocate run command to move a resource to its preferred node, as determined by current cluster status, constraints, location of resources and other settings. You can also use the pcs resource relocate show command to display migrated resources. For information on these commands, see the High Availability Add-On Reference. Simplified method for configuring fencing for redundant power supplies in a cluster When configuring fencing for redundant power supplies, you must ensure that when the power supplies are rebooted both power supplies are turned off before either power supply is turned back on. If the node never completely loses power, the node may not release its resources. This opens up the possibility of nodes accessing these resources simultaneously and corrupting them. Prior to Red Hat Enterprise Linux 7.2, you needed to explicitly configure different versions of the devices which used either the 'on' or 'off' actions. Since Red Hat Enterprise Linux 7.2, it is now only required to define each device once and to specify that both are required to fence the node. For information on configuring fencing for redundant power supplies, see the Fencing: Configuring STONITH chapter of the High Availability Add-On Reference manual. New --port-as-ip option for fencing agents Fence agents used only with single devices required complex configuration in pacemaker. It is now possible to use the --port-as-ip option to enter the IP address in the port option.
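A hedged usage sketch of the commands described above (the resource name is a placeholder): # Move a resource off its current node; this creates a location constraint and now prints a warning pcs resource move my_resource # Move resources back to their preferred nodes once the underlying problem is corrected pcs resource relocate run # Display migrated resources pcs resource relocate show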
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/clustering
Managing smart card authentication
Managing smart card authentication Red Hat Enterprise Linux 9 Configuring and using smart card authentication Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_smart_card_authentication/index
Chapter 44. Bean Component
Chapter 44. Bean Component Available as of Camel version 1.0 The bean: component binds beans to Camel message exchanges. 44.1. URI format Where beanID can be any string which is used to look up the bean in the Registry. 44.2. Options The Bean component supports 2 options, which are listed below. Name Description Default Type cache (advanced) If enabled, Camel will cache the result of the first Registry look-up. Cache can be enabled if the bean in the Registry is defined as a singleton scope. Boolean resolvePropertyPlaceholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Bean endpoint is configured using URI syntax: with the following path and query parameters: 44.2.1. Path Parameters (1 parameter): Name Description Default Type beanName Required Sets the name of the bean to invoke String 44.2.2. Query Parameters (5 parameters): Name Description Default Type method (producer) Sets the name of the method to invoke on the bean String cache (advanced) If enabled, Camel will cache the result of the first Registry look-up. Cache can be enabled if the bean in the Registry is defined as a singleton scope. Boolean multiParameterArray (advanced) Deprecated How to treat the parameters which are passed from the message body; if it is true, the message body should be an array of parameters. Note: This option is used internally by Camel, and is not intended for end users to use. false boolean parameters (advanced) Used for configuring additional properties on the bean Map synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean You can append query options to the URI in the following format, ?option=value&option=value&... 44.3. Using The object instance that is used to consume messages must be explicitly registered with the Registry. For example, if you are using Spring you must define the bean in the Spring configuration, spring.xml ; or if you don't use Spring, by registering the bean in JNDI. Once an endpoint has been registered, you can build Camel routes that use it to process exchanges. A bean: endpoint cannot be defined as the input to the route; i.e. you cannot consume from it, you can only route from some inbound message Endpoint to the bean endpoint as output. So consider using a direct: or queue: endpoint as the input. You can use the createProxy() methods on ProxyHelper to create a proxy that will generate BeanExchanges and send them to any endpoint: And the same route using Spring DSL: <route> <from uri="direct:hello"/> <to uri="bean:bye"/> </route> 44.4. Bean as endpoint Camel also supports invoking Bean as an Endpoint. In the route below: What happens is that when the exchange is routed to myBean , Camel will use the Bean Binding to invoke the bean. The source for the bean is just a plain POJO: Camel will use Bean Binding to invoke the sayHello method, by converting the Exchange's In body to the String type and storing the output of the method on the Exchange Out body. 44.5. Java DSL bean syntax Java DSL comes with syntactic sugar for the Bean component. Instead of specifying the bean explicitly as the endpoint (i.e.
to("bean:beanName") ) you can use the following syntax: // Send message to the bean endpoint // and invoke method resolved using Bean Binding. from("direct:start").beanRef("beanName"); // Send message to the bean endpoint // and invoke given method. from("direct:start").beanRef("beanName", "methodName"); Instead of passing name of the reference to the bean (so that Camel will lookup for it in the registry), you can specify the bean itself: // Send message to the given bean instance. from("direct:start").bean(new ExampleBean()); // Explicit selection of bean method to be invoked. from("direct:start").bean(new ExampleBean(), "methodName"); // Camel will create the instance of bean and cache it for you. from("direct:start").bean(ExampleBean.class); 44.6. Bean Binding How bean methods to be invoked are chosen (if they are not specified explicitly through the method parameter) and how parameter values are constructed from the Message are all defined by the Bean Binding mechanism which is used throughout all of the various Bean Integration mechanisms in Camel. 44.7. See Also Configuring Camel Component Endpoint Getting Started Class component Bean Binding Bean Integration
[ "bean:beanName[?options]", "bean:beanName", "<route> <from uri=\"direct:hello\"> <to uri=\"bean:bye\"/> </route>", "// Send message to the bean endpoint // and invoke method resolved using Bean Binding. from(\"direct:start\").beanRef(\"beanName\"); // Send message to the bean endpoint // and invoke given method. from(\"direct:start\").beanRef(\"beanName\", \"methodName\");", "// Send message to the given bean instance. from(\"direct:start\").bean(new ExampleBean()); // Explicit selection of bean method to be invoked. from(\"direct:start\").bean(new ExampleBean(), \"methodName\"); // Camel will create the instance of bean and cache it for you. from(\"direct:start\").bean(ExampleBean.class);" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/bean-component
Chapter 3. Using the Cluster Samples Operator with an alternate registry
Chapter 3. Using the Cluster Samples Operator with an alternate registry You can use the Cluster Samples Operator with an alternate registry by first creating a mirror registry. Important You must have access to the internet to obtain the necessary container images. In this procedure, you place the mirror registry on a mirror host that has access to both your network and the internet. 3.1. About the mirror registry You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry such as Red Hat Quay, JFrog Artifactory, Sonatype Nexus Repository, or Harbor. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift , a small-scale container registry included with OpenShift Container Platform subscriptions. You can use any container registry that supports Docker v2-2 , such as Red Hat Quay, the mirror registry for Red Hat OpenShift , Artifactory, Sonatype Nexus Repository, or Harbor. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry. Important The OpenShift image registry cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. If choosing a container registry that is not the mirror registry for Red Hat OpenShift , it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters. When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring . If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring . For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location. Note Red Hat does not test third party registries with OpenShift Container Platform. Additional information For information on viewing the CRI-O logs to view the image source, see Viewing the image pull source . 3.1.1. Preparing the mirror host Before you create the mirror registry, you must prepare the mirror host. 3.1.2. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . 
Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.2. Configuring credentials that allow images to be mirrored Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror. Prerequisites You configured a mirror registry to use in your disconnected environment. Procedure Complete the following steps on the installation host: Download your registry.redhat.io pull secret from Red Hat OpenShift Cluster Manager . Make a copy of your pull secret in JSON format: USD cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1 1 Specify the path to the folder to store the pull secret in and a name for the JSON file that you create. The contents of the file resemble the following example: { "auths": { "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } Generate the base64-encoded user name and password or token for your mirror registry: USD echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs= 1 For <user_name> and <password> , specify the user name and password that you configured for your registry. 
Edit the JSON file and add a section that describes your registry to it: "auths": { "<mirror_registry>": { 1 "auth": "<credentials>", 2 "email": "[email protected]" } }, 1 For <mirror_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:8443 2 For <credentials> , specify the base64-encoded user name and password for the mirror registry. The file resembles the following example: { "auths": { "registry.example.com": { "auth": "BGVtbYk3ZHAtqXs=", "email": "[email protected]" }, "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } 3.3. Mirroring the OpenShift Container Platform image repository Mirror the OpenShift Container Platform image repository to your registry to use during cluster installation or upgrade. Prerequisites Your mirror host has access to the internet. You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured. You downloaded the pull secret from Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. If you use self-signed certificates, you have specified a Subject Alternative Name in the certificates. Procedure Complete the following steps on the mirror host: Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page. Set the required environment variables: Export the release version: USD OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4 . Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your cluster: USD ARCHITECTURE=<cluster_architecture> 1 1 Specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. 
Mirror the version images to the mirror registry: If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Mirror the images to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Take the media to the restricted network environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. Important Running oc image mirror might result in the following error: error: unable to retrieve source image . This error occurs when image indexes include references to images that no longer exist on the image registry. Image indexes might retain older references to allow users running those images an upgrade path to newer points on the upgrade graph. As a temporary workaround, you can use the --skip-missing option to bypass the error and continue downloading the image index. For more information, see Service Mesh Operator mirroring failed . If the local container registry is connected to the mirror host, take the following actions: Directly push the release images to the local registry by using following command: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster. Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Note The image name gets patched to Quay.io during the mirroring process, and the podman images will show Quay.io in the registry on the bootstrap virtual machine. 
To create the installation program that is based on the content that you mirrored, extract it and pin it to the release: If your mirror host does not have internet access, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --icsp-file=<file> --command=openshift-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" If the local container registry is connected to the mirror host, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" Important To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content. You must perform this step on a machine with an active internet connection. For clusters using installer-provisioned infrastructure, run the following command: USD openshift-install 3.4. Using Cluster Samples Operator image streams with alternate or mirrored registries Most image streams in the openshift namespace managed by the Cluster Samples Operator point to images located in the Red Hat registry at registry.redhat.io . Note The cli , installer , must-gather , and tests image streams, while part of the install payload, are not managed by the Cluster Samples Operator. These are not addressed in this procedure. Important The Cluster Samples Operator must be set to Managed in a disconnected environment. To install the image streams, you have a mirrored registry. Prerequisites Access to the cluster as a user with the cluster-admin role. Create a pull secret for your mirror registry. Procedure Access the images of a specific image stream to mirror, for example: USD oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io Mirror images from registry.redhat.io associated with any image streams you need USD oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest Create the cluster's image configuration object: USD oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config Add the required trusted CAs for the mirror in the cluster's image configuration object: USD oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge Update the samplesRegistry field in the Cluster Samples Operator configuration object to contain the hostname portion of the mirror location defined in the mirror configuration: USD oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator Note This is required because the image stream import process does not use the mirror or search mechanism at this time. Add any image streams that are not mirrored into the skippedImagestreams field of the Cluster Samples Operator configuration object. Or if you do not want to support any of the sample image streams, set the Cluster Samples Operator to Removed in the Cluster Samples Operator configuration object. Note The Cluster Samples Operator issues alerts if image stream imports are failing but the Cluster Samples Operator is either periodically retrying or does not appear to be retrying them. Many of the templates in the openshift namespace reference the image streams. 
So using Removed to purge both the image streams and templates will eliminate the possibility of attempts to use them if they are not functional because of any missing image streams. 3.4.1. Cluster Samples Operator assistance for mirroring During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag. The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name> . During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed . If you choose to change it to Managed , it installs samples. Note The use of samples in a network-restricted or discontinued environment may require access to services external to your network. Some example services include: GitHub, Maven Central, npm, RubyGems, PyPI and others. There might be additional steps to take that allow the Cluster Samples Operator's objects to reach the services they require. You can use this config map as a reference for which images need to be mirrored for your image streams to import. While the Cluster Samples Operator is set to Removed , you can create your mirrored registry, or determine which existing mirrored registry you want to use. Mirror the samples you want to the mirrored registry using the new config map as your guide. Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object. Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry. Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored. See Using Cluster Samples Operator image streams with alternate or mirrored registries for a detailed procedure.
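Putting the pieces above together, the Cluster Samples Operator configuration object might look like the following sketch once a mirror is in place; the registry host name and the skipped image streams are assumptions for illustration only: apiVersion: samples.operator.openshift.io/v1 kind: Config metadata: name: cluster spec: managementState: Managed samplesRegistry: registry.example.com:5000 skippedImagestreams: - jenkins - jenkins-agent-maven You can also inspect the imagestreamtag-to-image config map described above to see which images must be mirrored: oc get configmap imagestreamtag-to-image -n openshift-cluster-samples-operator -o yaml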
[ "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "OCP_RELEASE=<release_version>", "LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'", "LOCAL_REPOSITORY='<local_repository_name>'", "PRODUCT_REPO='openshift-release-dev'", "LOCAL_SECRET_JSON='<path_to_pull_secret>'", "RELEASE_NAME=\"ocp-release\"", "ARCHITECTURE=<cluster_architecture> 1", "REMOVABLE_MEDIA_PATH=<path> 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --icsp-file=<file> --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"", "openshift-install", "oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io", "oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest", "oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/images/samples-operator-alt-registry
Preface
Preface Red Hat Quay container registry platform provides secure storage, distribution, and governance of containers and cloud-native artifacts on any infrastructure. It is available as a standalone component or as an Operator on OpenShift Container Platform. Red Hat Quay includes the following features and benefits: Granular security management Fast and robust at any scale High velocity CI/CD Automated installation and updates Enterprise authentication and team-based access control OpenShift Container Platform integration Red Hat Quay is regularly released, containing new features, bug fixes, and software updates. To upgrade Red Hat Quay for both standalone and OpenShift Container Platform deployments, see Upgrade Red Hat Quay . Important Red Hat Quay only supports rolling back, or downgrading, to z-stream versions, for example, from 3.7.2 to 3.7.1. Rolling back to y-stream versions (for example, from 3.7.0 to 3.6.0) is not supported. This is because Red Hat Quay updates might contain database schema upgrades that are applied when upgrading to a new version of Red Hat Quay. Database schema upgrades are not considered backwards compatible. Downgrading to z-streams is neither recommended nor supported by either Operator based deployments or virtual machine based deployments. Downgrading should only be done in extreme circumstances. The decision to roll back your Red Hat Quay deployment must be made in conjunction with the Red Hat Quay support and development teams. For more information, contact Red Hat Quay support. Documentation for Red Hat Quay is versioned with each release. The latest Red Hat Quay documentation is available from the Red Hat Quay Documentation page. Currently, version 3 is the latest major version. Note Prior to version 2.9.2, Red Hat Quay was called Quay Enterprise. Documentation for 2.9.2 and prior versions is archived on the Product Documentation for Red Hat Quay 2.9 page.
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/red_hat_quay_release_notes/pr01
Chapter 12. The NVIDIA GPU administration dashboard
Chapter 12. The NVIDIA GPU administration dashboard 12.1. Introduction The OpenShift Console NVIDIA GPU plugin is a dedicated administration dashboard for NVIDIA GPU usage visualization in the OpenShift Container Platform (OCP) Console. The visualizations in the administration dashboard provide guidance on how to best optimize GPU resources in clusters, such as when a GPU is under- or over-utilized. The OpenShift Console NVIDIA GPU plugin works as a remote bundle for the OCP console. To run the plugin the OCP console must be running. 12.2. Installing the NVIDIA GPU administration dashboard Install the NVIDIA GPU plugin by using Helm on the OpenShift Container Platform (OCP) Console to add GPU capabilities. The OpenShift Console NVIDIA GPU plugin works as a remote bundle for the OCP console. To run the OpenShift Console NVIDIA GPU plugin an instance of the OCP console must be running. Prerequisites Red Hat OpenShift 4.11+ NVIDIA GPU operator Helm Procedure Use the following procedure to install the OpenShift Console NVIDIA GPU plugin. Add the Helm repository: USD helm repo add rh-ecosystem-edge https://rh-ecosystem-edge.github.io/console-plugin-nvidia-gpu USD helm repo update Install the Helm chart in the default NVIDIA GPU operator namespace: USD helm install -n nvidia-gpu-operator console-plugin-nvidia-gpu rh-ecosystem-edge/console-plugin-nvidia-gpu Example output NAME: console-plugin-nvidia-gpu LAST DEPLOYED: Tue Aug 23 15:37:35 2022 NAMESPACE: nvidia-gpu-operator STATUS: deployed REVISION: 1 NOTES: View the Console Plugin NVIDIA GPU deployed resources by running the following command: USD oc -n {{ .Release.Namespace }} get all -l app.kubernetes.io/name=console-plugin-nvidia-gpu Enable the plugin by running the following command: # Check if a plugins field is specified USD oc get consoles.operator.openshift.io cluster --output=jsonpath="{.spec.plugins}" # if not, then run the following command to enable the plugin USD oc patch consoles.operator.openshift.io cluster --patch '{ "spec": { "plugins": ["console-plugin-nvidia-gpu"] } }' --type=merge # if yes, then run the following command to enable the plugin USD oc patch consoles.operator.openshift.io cluster --patch '[{"op": "add", "path": "/spec/plugins/-", "value": "console-plugin-nvidia-gpu" }]' --type=json # add the required DCGM Exporter metrics ConfigMap to the existing NVIDIA operator ClusterPolicy CR: oc patch clusterpolicies.nvidia.com gpu-cluster-policy --patch '{ "spec": { "dcgmExporter": { "config": { "name": "console-plugin-nvidia-gpu" } } } }' --type=merge The dashboard relies mostly on Prometheus metrics exposed by the NVIDIA DCGM Exporter, but the default exposed metrics are not enough for the dashboard to render the required gauges. Therefore, the DGCM exporter is configured to expose a custom set of metrics, as shown here. apiVersion: v1 data: dcgm-metrics.csv: | DCGM_FI_PROF_GR_ENGINE_ACTIVE, gauge, gpu utilization. DCGM_FI_DEV_MEM_COPY_UTIL, gauge, mem utilization. DCGM_FI_DEV_ENC_UTIL, gauge, enc utilization. DCGM_FI_DEV_DEC_UTIL, gauge, dec utilization. DCGM_FI_DEV_POWER_USAGE, gauge, power usage. DCGM_FI_DEV_POWER_MGMT_LIMIT_MAX, gauge, power mgmt limit. DCGM_FI_DEV_GPU_TEMP, gauge, gpu temp. DCGM_FI_DEV_SM_CLOCK, gauge, sm clock. DCGM_FI_DEV_MAX_SM_CLOCK, gauge, max sm clock. DCGM_FI_DEV_MEM_CLOCK, gauge, mem clock. DCGM_FI_DEV_MAX_MEM_CLOCK, gauge, max mem clock. 
kind: ConfigMap metadata: annotations: meta.helm.sh/release-name: console-plugin-nvidia-gpu meta.helm.sh/release-namespace: nvidia-gpu-operator creationTimestamp: "2022-10-26T19:46:41Z" labels: app.kubernetes.io/component: console-plugin-nvidia-gpu app.kubernetes.io/instance: console-plugin-nvidia-gpu app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: console-plugin-nvidia-gpu app.kubernetes.io/part-of: console-plugin-nvidia-gpu app.kubernetes.io/version: latest helm.sh/chart: console-plugin-nvidia-gpu-0.2.3 name: console-plugin-nvidia-gpu namespace: nvidia-gpu-operator resourceVersion: "19096623" uid: 96cdf700-dd27-437b-897d-5cbb1c255068 Install the ConfigMap and edit the NVIDIA Operator ClusterPolicy CR to add that ConfigMap to the DCGM exporter configuration. The installation of the ConfigMap is done by the new version of the Console Plugin NVIDIA GPU Helm Chart, but the ClusterPolicy CR editing is done by the user. View the deployed resources: USD oc -n nvidia-gpu-operator get all -l app.kubernetes.io/name=console-plugin-nvidia-gpu Example output NAME READY STATUS RESTARTS AGE pod/console-plugin-nvidia-gpu-7dc9cfb5df-ztksx 1/1 Running 0 2m6s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/console-plugin-nvidia-gpu ClusterIP 172.30.240.138 <none> 9443/TCP 2m6s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/console-plugin-nvidia-gpu 1/1 1 1 2m6s NAME DESIRED CURRENT READY AGE replicaset.apps/console-plugin-nvidia-gpu-7dc9cfb5df 1 1 1 2m6s 12.3. Using the NVIDIA GPU administration dashboard After deploying the OpenShift Console NVIDIA GPU plugin, log in to the OpenShift Container Platform web console using your login credentials to access the Administrator perspective. To view the changes, you need to refresh the console to see the GPUs tab under Compute . 12.3.1. Viewing the cluster GPU overview You can view the status of your cluster GPUs in the Overview page by selecting Overview in the Home section. The Overview page provides information about the cluster GPUs, including: Details about the GPU providers Status of the GPUs Cluster utilization of the GPUs 12.3.2. Viewing the GPUs dashboard You can view the NVIDIA GPU administration dashboard by selecting GPUs in the Compute section of the OpenShift Console. Charts on the GPUs dashboard include: GPU utilization : Shows the ratio of time the graphics engine is active and is based on the DCGM_FI_PROF_GR_ENGINE_ACTIVE metric. Memory utilization : Shows the memory being used by the GPU and is based on the DCGM_FI_DEV_MEM_COPY_UTIL metric. Encoder utilization : Shows the video encoder rate of utilization and is based on the DCGM_FI_DEV_ENC_UTIL metric. Decoder utilization : Shows the video decoder rate of utilization and is based on the DCGM_FI_DEV_DEC_UTIL metric. Power consumption : Shows the average power usage of the GPU in Watts and is based on the DCGM_FI_DEV_POWER_USAGE metric. GPU temperature : Shows the current GPU temperature and is based on the DCGM_FI_DEV_GPU_TEMP metric. The maximum is set to 110 , which is an empirical number, as the actual number is not exposed via a metric. GPU clock speed : Shows the average clock speed utilized by the GPU and is based on the DCGM_FI_DEV_SM_CLOCK metric. Memory clock speed : Shows the average clock speed utilized by memory and is based on the DCGM_FI_DEV_MEM_CLOCK metric. 12.3.3. Viewing the GPU Metrics You can view the metrics for the GPUs by selecting the metric at the bottom of each GPU to view the Metrics page. 
On the Metrics page, you can: Specify a refresh rate for the metrics Add, run, disable, and delete queries Insert Metrics Reset the zoom view
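As a quick follow-up to the installation steps in Section 12.2, the following shell sketch can confirm that the plugin is wired up before you rely on the dashboard. It is only a suggested verification, not part of the official procedure; the resource names (the cluster console operator object, the gpu-cluster-policy ClusterPolicy, and the console-plugin-nvidia-gpu label) are taken from the commands shown above, and the expected values assume the default Helm installation.
# Confirm the plugin is listed in the console operator configuration
oc get consoles.operator.openshift.io cluster --output=jsonpath="{.spec.plugins}"
# Confirm the ClusterPolicy CR references the custom DCGM exporter metrics ConfigMap
oc get clusterpolicies.nvidia.com gpu-cluster-policy --output=jsonpath="{.spec.dcgmExporter.config.name}"
# Confirm the plugin pod is running in the NVIDIA GPU operator namespace
oc -n nvidia-gpu-operator get pods -l app.kubernetes.io/name=console-plugin-nvidia-gpu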
[ "helm repo add rh-ecosystem-edge https://rh-ecosystem-edge.github.io/console-plugin-nvidia-gpu", "helm repo update", "helm install -n nvidia-gpu-operator console-plugin-nvidia-gpu rh-ecosystem-edge/console-plugin-nvidia-gpu", "NAME: console-plugin-nvidia-gpu LAST DEPLOYED: Tue Aug 23 15:37:35 2022 NAMESPACE: nvidia-gpu-operator STATUS: deployed REVISION: 1 NOTES: View the Console Plugin NVIDIA GPU deployed resources by running the following command: oc -n {{ .Release.Namespace }} get all -l app.kubernetes.io/name=console-plugin-nvidia-gpu Enable the plugin by running the following command: Check if a plugins field is specified oc get consoles.operator.openshift.io cluster --output=jsonpath=\"{.spec.plugins}\" if not, then run the following command to enable the plugin oc patch consoles.operator.openshift.io cluster --patch '{ \"spec\": { \"plugins\": [\"console-plugin-nvidia-gpu\"] } }' --type=merge if yes, then run the following command to enable the plugin oc patch consoles.operator.openshift.io cluster --patch '[{\"op\": \"add\", \"path\": \"/spec/plugins/-\", \"value\": \"console-plugin-nvidia-gpu\" }]' --type=json add the required DCGM Exporter metrics ConfigMap to the existing NVIDIA operator ClusterPolicy CR: patch clusterpolicies.nvidia.com gpu-cluster-policy --patch '{ \"spec\": { \"dcgmExporter\": { \"config\": { \"name\": \"console-plugin-nvidia-gpu\" } } } }' --type=merge", "apiVersion: v1 data: dcgm-metrics.csv: | DCGM_FI_PROF_GR_ENGINE_ACTIVE, gauge, gpu utilization. DCGM_FI_DEV_MEM_COPY_UTIL, gauge, mem utilization. DCGM_FI_DEV_ENC_UTIL, gauge, enc utilization. DCGM_FI_DEV_DEC_UTIL, gauge, dec utilization. DCGM_FI_DEV_POWER_USAGE, gauge, power usage. DCGM_FI_DEV_POWER_MGMT_LIMIT_MAX, gauge, power mgmt limit. DCGM_FI_DEV_GPU_TEMP, gauge, gpu temp. DCGM_FI_DEV_SM_CLOCK, gauge, sm clock. DCGM_FI_DEV_MAX_SM_CLOCK, gauge, max sm clock. DCGM_FI_DEV_MEM_CLOCK, gauge, mem clock. DCGM_FI_DEV_MAX_MEM_CLOCK, gauge, max mem clock. kind: ConfigMap metadata: annotations: meta.helm.sh/release-name: console-plugin-nvidia-gpu meta.helm.sh/release-namespace: nvidia-gpu-operator creationTimestamp: \"2022-10-26T19:46:41Z\" labels: app.kubernetes.io/component: console-plugin-nvidia-gpu app.kubernetes.io/instance: console-plugin-nvidia-gpu app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: console-plugin-nvidia-gpu app.kubernetes.io/part-of: console-plugin-nvidia-gpu app.kubernetes.io/version: latest helm.sh/chart: console-plugin-nvidia-gpu-0.2.3 name: console-plugin-nvidia-gpu namespace: nvidia-gpu-operator resourceVersion: \"19096623\" uid: 96cdf700-dd27-437b-897d-5cbb1c255068", "oc -n nvidia-gpu-operator get all -l app.kubernetes.io/name=console-plugin-nvidia-gpu", "NAME READY STATUS RESTARTS AGE pod/console-plugin-nvidia-gpu-7dc9cfb5df-ztksx 1/1 Running 0 2m6s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/console-plugin-nvidia-gpu ClusterIP 172.30.240.138 <none> 9443/TCP 2m6s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/console-plugin-nvidia-gpu 1/1 1 1 2m6s NAME DESIRED CURRENT READY AGE replicaset.apps/console-plugin-nvidia-gpu-7dc9cfb5df 1 1 1 2m6s" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/monitoring/nvidia-gpu-admin-dashboard
Chapter 2. Accessing Red Hat Satellite
Chapter 2. Accessing Red Hat Satellite After Red Hat Satellite has been installed and configured, use a browser to log in to the Satellite web UI. From the Satellite web UI, you can manage and monitor your Satellite infrastructure. 2.1. Logging in to the Satellite web UI Use the web user interface to log in to Satellite for further configuration. Prerequisites Ensure that the Katello root CA certificate is installed in your browser. For more information, see Section 2.2, "Importing the Katello root CA certificate" . Procedure Access Satellite Server using a web browser pointed to the fully qualified domain name: Enter the user name and password created during the configuration process. If a user was not created during the configuration process, the default user name is admin . Next steps If you have problems logging in, you can reset the password. For more information, see Section 2.3, "Resetting the administrative user password" . 2.2. Importing the Katello root CA certificate The first time you log in to Satellite, you might see a warning informing you that you are using the default self-signed certificate and you might not be able to connect this browser to Satellite until the root CA certificate is imported into the browser. Use the following procedure to locate the root CA certificate on Satellite and to import it into your browser. To use the CLI instead of the Satellite web UI, see CLI Procedure . Prerequisites Your Red Hat Satellite is installed and configured. Procedure Identify the fully qualified domain name of your Satellite Server: Access the pub directory on your Satellite Server using a web browser pointed to the fully qualified domain name: When you access Satellite for the first time, an untrusted connection warning displays in your web browser. Accept the self-signed certificate and add the Satellite URL as a security exception to override the settings. This procedure might differ depending on the browser being used. Ensure that the Satellite URL is valid before you accept the security exception. Select katello-server-ca.crt . Import the certificate into your browser as a certificate authority and trust it to identify websites. CLI procedure From the Satellite CLI, copy the katello-server-ca.crt file to the machine you use to access the Satellite web UI: In the browser, import the katello-server-ca.crt certificate as a certificate authority and trust it to identify websites. 2.3. Resetting the administrative user password Use the following procedures to reset the administrative password to randomly generated characters or to set a new administrative password. To reset the administrative user password Log in to the base operating system where Satellite Server is installed. Enter the following command to reset the password: Use this password to reset the password in the Satellite web UI. Edit the ~/.hammer/cli.modules.d/foreman.yml file on Satellite Server to add the new password: Unless you update the ~/.hammer/cli.modules.d/foreman.yml file, you cannot use the new password with Hammer CLI. To set a new administrative user password Log in to the base operating system where Satellite Server is installed. To set the password, enter the following command: Edit the ~/.hammer/cli.modules.d/foreman.yml file on Satellite Server to add the new password: Unless you update the ~/.hammer/cli.modules.d/foreman.yml file, you cannot use the new password with Hammer CLI. 2.4. 
Setting a custom message on the Satellite web UI login page You can change the default text on the login page to a custom message you want your users to see every time they access the page. For example, your custom message might be a warning required by your company. Procedure In the Satellite web UI, navigate to Administer > Settings , and click the General tab. Enter your custom message in the Login page footer text field. Click Submit . Verification Log out of the Satellite web UI and verify that the custom message is now displayed on the login page.
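After importing the certificate, a quick command-line check can confirm that TLS connections to the Satellite web UI now validate against the Katello root CA. This is only a suggested sanity check, not part of the documented procedure; satellite.example.com stands in for your Satellite Server FQDN and katello-server-ca.crt is the file copied in the CLI procedure above.
# Inspect the subject and expiry of the downloaded CA certificate
openssl x509 -in katello-server-ca.crt -noout -subject -enddate
# An HTTP status code is returned without a certificate warning when the CA is trusted
curl --cacert katello-server-ca.crt -o /dev/null -w '%{http_code}\n' https://satellite.example.com/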
[ "https:// satellite.example.com /", "hostname -f", "https:// satellite.example.com /pub", "scp /var/www/html/pub/katello-server-ca.crt username@hostname:remotefile", "foreman-rake permissions:reset Reset to user: admin, password: qwJxBptxb7Gfcjj5", "vi ~/.hammer/cli.modules.d/foreman.yml", "foreman-rake permissions:reset password= new_password", "vi ~/.hammer/cli.modules.d/foreman.yml" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/configuring_authentication_for_red_hat_satellite_users/Accessing_Server_authentication
Chapter 132. KafkaMirrorMakerStatus schema reference
Chapter 132. KafkaMirrorMakerStatus schema reference Used in: KafkaMirrorMaker
Property | Property type | Description
conditions | Condition array | List of status conditions.
observedGeneration | integer | The generation of the CRD that was last reconciled by the operator.
labelSelector | string | Label selector for pods providing this resource.
replicas | integer | The current number of pods being used to provide this resource.
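On a running cluster, these status fields can be inspected with the OpenShift CLI, as in the hedged sketch below; the resource name my-mirror-maker and the kafka namespace are placeholders for your own deployment, not values defined by the schema.
# Show the status conditions reported for a KafkaMirrorMaker resource
oc get kafkamirrormaker my-mirror-maker -n kafka -o jsonpath='{.status.conditions}'
# Show the observed generation and the current replica count
oc get kafkamirrormaker my-mirror-maker -n kafka -o jsonpath='{.status.observedGeneration} {.status.replicas}{"\n"}'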
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaMirrorMakerStatus-reference
4.8. The file_t and default_t Types
4.8. The file_t and default_t Types When using a file system that supports extended attributes (EA), the file_t type is the default type of a file that has not yet been assigned an EA value. This type is only used for this purpose and does not exist on correctly-labeled file systems, because all files on a system running SELinux should have a proper SELinux context, and the file_t type is never used in file-context configuration [4] . The default_t type is used on files that do not match any pattern in file-context configuration, so that such files can be distinguished from files that do not have a context on disk, and generally are kept inaccessible to confined domains. For example, if you create a new top-level directory, such as mydirectory/ , this directory may be labeled with the default_t type. If services need access to this directory, you need to update the file-context configuration for this location. See Section 4.7.2, "Persistent Changes: semanage fcontext" for details on adding a context to the file-context configuration. [4] Files in the /etc/selinux/targeted/contexts/files/ directory define contexts for files and directories. Files in this directory are read by the restorecon and setfiles utilities to restore files and directories to their default contexts.
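The following shell sketch illustrates the relabeling workflow described above for a new top-level directory. It is only an example: the httpd_sys_content_t type is an assumed choice (as if a web server needed access to the directory), and mydirectory/ matches the placeholder name used in the text.
# Check the label the new directory currently carries (typically default_t)
ls -dZ /mydirectory
# Add a persistent file-context rule for the directory and its contents
semanage fcontext -a -t httpd_sys_content_t "/mydirectory(/.*)?"
# Apply the new context to the files on disk
restorecon -Rv /mydirectory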
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-working_with_selinux-the_file_t_and_default_t_types
Chapter 4. Alerts
Chapter 4. Alerts 4.1. Setting up alerts For internal mode clusters, various alerts related to the storage metrics services, storage cluster, disk devices, cluster health, cluster capacity, and so on are displayed in the Block and File dashboard and the Object dashboard. These alerts are not available for external mode. Note It might take a few minutes for alerts to be shown in the alert panel, because only firing alerts are visible in this panel. You can also view alerts with additional details and customize the display of alerts in the OpenShift Container Platform. For more information, see Managing alerts .
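If you prefer the command line to the dashboards, the following sketch lists the alerting rules that ship with the deployment. It is only a suggestion and assumes OpenShift Data Foundation is installed in the openshift-storage namespace, which is the usual default but not guaranteed for every deployment.
# List the PrometheusRule objects that define the storage alerts
oc get prometheusrules -n openshift-storage
# Print the alert names defined in those rule objects
oc get prometheusrules -n openshift-storage -o jsonpath='{range .items[*].spec.groups[*].rules[*]}{.alert}{"\n"}{end}' | sort -u | grep -v '^$'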
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/monitoring_openshift_data_foundation/alerts
Chapter 11. Using clustered counters
Chapter 11. Using clustered counters Data Grid provides counters that record the count of objects and are distributed across all nodes in a cluster. 11.1. Clustered Counters Clustered counters are counters which are distributed and shared among all nodes in the Data Grid cluster. Counters can have different consistency levels: strong and weak. Although strong and weak counters have separate interfaces, both support updating the value, returning the current value, and providing events when the value is updated. Details are provided below in this document to help you choose the one that best fits your use case. 11.1.1. Installation and Configuration In order to start using the counters, you need to add the dependency to your Maven pom.xml file: pom.xml <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-clustered-counter</artifactId> </dependency> The counters can be configured in the Data Grid configuration file or on demand via the CounterManager interface, detailed later in this document. Counters configured in the Data Grid configuration file are created at boot time when the EmbeddedCacheManager starts. These counters are started eagerly and are available on all of the cluster's nodes. configuration.xml <infinispan> <cache-container ...> <!-- To persist counters, you need to configure the global state. --> <global-state> <!-- Global state configuration goes here. --> </global-state> <!-- Cache configuration goes here. --> <counters xmlns="urn:infinispan:config:counters:14.0" num-owners="3" reliability="CONSISTENT"> <strong-counter name="c1" initial-value="1" storage="PERSISTENT"/> <strong-counter name="c2" initial-value="2" storage="VOLATILE" lower-bound="0"/> <strong-counter name="c3" initial-value="3" storage="PERSISTENT" upper-bound="5"/> <strong-counter name="c4" initial-value="4" storage="VOLATILE" lower-bound="0" upper-bound="10"/> <strong-counter name="c5" initial-value="0" upper-bound="100" lifespan="60000"/> <weak-counter name="c6" initial-value="5" storage="PERSISTENT" concurrency-level="1"/> </counters> </cache-container> </infinispan> or programmatically, in the GlobalConfigurationBuilder : GlobalConfigurationBuilder globalConfigurationBuilder = ...; CounterManagerConfigurationBuilder builder = globalConfigurationBuilder.addModule(CounterManagerConfigurationBuilder.class); builder.numOwner(3).reliability(Reliability.CONSISTENT); builder.addStrongCounter().name("c1").initialValue(1).storage(Storage.PERSISTENT); builder.addStrongCounter().name("c2").initialValue(2).lowerBound(0).storage(Storage.VOLATILE); builder.addStrongCounter().name("c3").initialValue(3).upperBound(5).storage(Storage.PERSISTENT); builder.addStrongCounter().name("c4").initialValue(4).lowerBound(0).upperBound(10).storage(Storage.VOLATILE); builder.addStrongCounter().name("c5").initialValue(0).upperBound(100).lifespan(60000); builder.addWeakCounter().name("c6").initialValue(5).concurrencyLevel(1).storage(Storage.PERSISTENT); On the other hand, the counters can be configured on demand, at any time after the EmbeddedCacheManager is initialized. 
CounterManager manager = ...; manager.defineCounter("c1", CounterConfiguration.builder(CounterType.UNBOUNDED_STRONG).initialValue(1).storage(Storage.PERSISTENT).build()); manager.defineCounter("c2", CounterConfiguration.builder(CounterType.BOUNDED_STRONG).initialValue(2).lowerBound(0).storage(Storage.VOLATILE).build()); manager.defineCounter("c3", CounterConfiguration.builder(CounterType.BOUNDED_STRONG).initialValue(3).upperBound(5).storage(Storage.PERSISTENT).build()); manager.defineCounter("c4", CounterConfiguration.builder(CounterType.BOUNDED_STRONG).initialValue(4).lowerBound(0).upperBound(10).storage(Storage.VOLATILE).build()); manager.defineCounter("c5", CounterConfiguration.builder(CounterType.BOUNDED_STRONG).initialValue(0).upperBound(100).lifespan(60000).build()); manager.defineCounter("c6", CounterConfiguration.builder(CounterType.WEAK).initialValue(5).concurrencyLevel(1).storage(Storage.PERSISTENT).build()); Note CounterConfiguration is immutable and can be reused. The method defineCounter() will return true if the counter is successfully configured or false otherwise. However, if the configuration is invalid, the method will throw a CounterConfigurationException . To find out if a counter is already defined, use the method isDefined() . CounterManager manager = ... if (!manager.isDefined("someCounter")) { manager.define("someCounter", ...); } Additional resources Data Grid configuration schema reference 11.1.1.1. List counter names To list all the counters defined, the method CounterManager.getCounterNames() returns a collection of all counter names created cluster-wide. 11.1.2. CounterManager interface The CounterManager interface is the entry point to define, retrieve and remove counters. Embedded deployments The CounterManager automatically listens for the creation of an EmbeddedCacheManager and registers one instance per EmbeddedCacheManager . It starts the caches needed to store the counter state and configures the default counters. Retrieving the CounterManager is as simple as invoking the EmbeddedCounterManagerFactory.asCounterManager(EmbeddedCacheManager) method, as shown in the example below: // create or obtain your EmbeddedCacheManager EmbeddedCacheManager manager = ...; // retrieve the CounterManager CounterManager counterManager = EmbeddedCounterManagerFactory.asCounterManager(manager); Server deployments For Hot Rod clients, the CounterManager is registered in the RemoteCacheManager and can be retrieved as follows: // create or obtain your RemoteCacheManager RemoteCacheManager manager = ...; // retrieve the CounterManager CounterManager counterManager = RemoteCounterManagerFactory.asCounterManager(manager); 11.1.2.1. Remove a counter via CounterManager There is a difference between removing a counter via the Strong/WeakCounter interfaces and via the CounterManager . The CounterManager.remove(String) removes the counter value from the cluster and removes all the listeners registered in the counter in the local counter instance. In addition, the counter instance is no longer reusable and it may return invalid results. On the other hand, the Strong/WeakCounter removal only removes the counter value. The instance can still be reused and the listeners still work. Note The counter is re-created if it is accessed after a removal. 11.1.3. The Counter A counter can be strong ( StrongCounter ) or weakly consistent ( WeakCounter ) and both are identified by a name. 
They have a specific interface but they share some logic, namely, both of them are asynchronous ( a CompletableFuture is returned by each operation), provide an update event and can be reset to their initial value. If you don't want to use the async API, it is possible to return a synchronous counter via the sync() method. The API is the same but without the CompletableFuture return value. The following methods are common to both interfaces: String getName(); CompletableFuture<Long> getValue(); CompletableFuture<Void> reset(); <T extends CounterListener> Handle<T> addListener(T listener); CounterConfiguration getConfiguration(); CompletableFuture<Void> remove(); SyncStrongCounter sync(); //SyncWeakCounter for WeakCounter getName() returns the counter name (identifier). getValue() returns the current counter's value. reset() resets the counter's value to its initial value. addListener() registers a listener to receive update events. More details are provided in the Notifications and Events section. getConfiguration() returns the configuration used by the counter. remove() removes the counter value from the cluster. The instance can still be used and the listeners are kept. sync() creates a synchronous counter. Note The counter is re-created if it is accessed after a removal. 11.1.3.1. The StrongCounter interface: when consistency or bounds matter. The strong counter uses a single key stored in a Data Grid cache to provide the consistency needed. All the updates are performed under the key lock to update its value. On the other hand, reads don't acquire any locks and read the current value. This scheme also allows the counter value to be bounded and provides atomic operations like compare-and-set/swap. A StrongCounter can be retrieved from the CounterManager by using the getStrongCounter() method. As an example: CounterManager counterManager = ... StrongCounter aCounter = counterManager.getStrongCounter("my-counter"); Warning Since every operation will hit a single key, the StrongCounter has a higher contention rate. The StrongCounter interface adds the following methods: default CompletableFuture<Long> incrementAndGet() { return addAndGet(1L); } default CompletableFuture<Long> decrementAndGet() { return addAndGet(-1L); } CompletableFuture<Long> addAndGet(long delta); CompletableFuture<Boolean> compareAndSet(long expect, long update); CompletableFuture<Long> compareAndSwap(long expect, long update); incrementAndGet() increments the counter by one and returns the new value. decrementAndGet() decrements the counter by one and returns the new value. addAndGet() adds a delta to the counter's value and returns the new value. compareAndSet() and compareAndSwap() atomically set the counter's value if the current value is the expected one. Note An operation is considered completed when the CompletableFuture is completed. Note The difference between compare-and-set and compare-and-swap is that the former returns true if the operation succeeds while the latter returns the value. The compare-and-swap is successful if the return value is the same as the expected one. 11.1.3.1.1. Bounded StrongCounter When bounded, all the update methods above throw a CounterOutOfBoundsException when they reach the lower or upper bound. The exception has the following methods to check which bound has been reached: public boolean isUpperBoundReached(); public boolean isLowerBoundReached(); 11.1.3.1.2. 
Use cases The strong counter fits better in the following use cases: When the counter's value is needed after each update (for example, a cluster-wide ID generator or sequences) When a bounded counter is needed (for example, a rate limiter) 11.1.3.1.3. Usage Examples StrongCounter counter = counterManager.getStrongCounter("unbounded_counter"); // incrementing the counter System.out.println("new value is " + counter.incrementAndGet().get()); // decrement the counter's value by 100 using the functional API counter.addAndGet(-100).thenApply(v -> { System.out.println("new value is " + v); return null; }).get(); // alternative, you can do some work while the counter is updated CompletableFuture<Long> f = counter.addAndGet(10); // ... do some work ... System.out.println("new value is " + f.get()); // and then, check the current value System.out.println("current value is " + counter.getValue().get()); // finally, reset to initial value counter.reset().get(); System.out.println("current value is " + counter.getValue().get()); // or set to a new value if zero System.out.println("compare and set succeeded? " + counter.compareAndSet(0, 1)); Below is another example using a bounded counter: StrongCounter counter = counterManager.getStrongCounter("bounded_counter"); // incrementing the counter try { System.out.println("new value is " + counter.addAndGet(100).get()); } catch (ExecutionException e) { Throwable cause = e.getCause(); if (cause instanceof CounterOutOfBoundsException) { if (((CounterOutOfBoundsException) cause).isUpperBoundReached()) { System.out.println("ops, upper bound reached."); } else if (((CounterOutOfBoundsException) cause).isLowerBoundReached()) { System.out.println("ops, lower bound reached."); } } } // now using the functional API counter.addAndGet(-100).handle((v, throwable) -> { if (throwable != null) { Throwable cause = throwable.getCause(); if (cause instanceof CounterOutOfBoundsException) { if (((CounterOutOfBoundsException) cause).isUpperBoundReached()) { System.out.println("ops, upper bound reached."); } else if (((CounterOutOfBoundsException) cause).isLowerBoundReached()) { System.out.println("ops, lower bound reached."); } } return null; } System.out.println("new value is " + v); return null; }).get(); Compare-and-set vs Compare-and-swap examples: StrongCounter counter = counterManager.getStrongCounter("my-counter"); long oldValue, newValue; do { oldValue = counter.getValue().get(); newValue = someLogic(oldValue); } while (!counter.compareAndSet(oldValue, newValue).get()); With compare-and-swap, one counter invocation ( counter.getValue() ) is saved: StrongCounter counter = counterManager.getStrongCounter("my-counter"); long oldValue = counter.getValue().get(); long currentValue, newValue; do { currentValue = oldValue; newValue = someLogic(oldValue); } while ((oldValue = counter.compareAndSwap(oldValue, newValue).get()) != currentValue); To use a strong counter as a rate limiter, configure upper-bound and lifespan parameters as follows: // 5 request per minute CounterConfiguration configuration = CounterConfiguration.builder(CounterType.BOUNDED_STRONG) .upperBound(5) .lifespan(60000) .build(); counterManager.defineCounter("rate_limiter", configuration); StrongCounter counter = counterManager.getStrongCounter("rate_limiter"); // on each operation, invoke try { counter.incrementAndGet().get(); // continue with operation } catch (InterruptedException e) { Thread.currentThread().interrupt(); } catch (ExecutionException e) { if (e.getCause() instanceof 
CounterOutOfBoundsException) { // maximum rate. discard operation return; } else { // unexpected error, handling property } } Note The lifespan parameter is an experimental capability and may be removed in a future version. 11.1.3.2. The WeakCounter interface: when speed is needed The WeakCounter stores the counter's value in multiple keys in a Data Grid cache. The number of keys created is configured by the concurrency-level attribute. Each key stores a partial state of the counter's value and it can be updated concurrently. Its main advantage over the StrongCounter is the lower contention in the cache. On the other hand, reading its value is more expensive and bounds are not allowed. Warning The reset operation should be handled with caution. It is not atomic and it produces intermediate values. These values may be seen by a read operation and by any registered listener. A WeakCounter can be retrieved from the CounterManager by using the getWeakCounter() method. As an example: CounterManager counterManager = ... WeakCounter aCounter = counterManager.getWeakCounter("my-counter"); 11.1.3.2.1. Weak Counter Interface The WeakCounter adds the following methods: default CompletableFuture<Void> increment() { return add(1L); } default CompletableFuture<Void> decrement() { return add(-1L); } CompletableFuture<Void> add(long delta); They are similar to the StrongCounter's methods but they don't return the new value. 11.1.3.2.2. Use cases The weak counter fits best in use cases where the result of the update operation is not needed or the counter's value is not required too often. Collecting statistics is a good example of such a use case. 11.1.3.2.3. Examples Below is an example of weak counter usage. WeakCounter counter = counterManager.getWeakCounter("my_counter"); // increment the counter and check its result counter.increment().get(); System.out.println("current value is " + counter.getValue()); CompletableFuture<Void> f = counter.add(-100); //do some work f.get(); //wait until finished System.out.println("current value is " + counter.getValue().get()); //using the functional API counter.reset().whenComplete((aVoid, throwable) -> System.out.println("Reset done " + (throwable == null ? "successfully" : "unsuccessfully"))).get(); System.out.println("current value is " + counter.getValue().get()); 11.1.4. Notifications and Events Both strong and weak counters support a listener to receive update events. The listener must implement CounterListener and it can be registered by the following method: <T extends CounterListener> Handle<T> addListener(T listener); The CounterListener has the following interface: public interface CounterListener { void onUpdate(CounterEvent entry); } The returned Handle object's main goal is to remove the CounterListener when it is no longer needed. It also provides access to the CounterListener instance that it is handling. It has the following interface: public interface Handle<T extends CounterListener> { T getCounterListener(); void remove(); } Finally, the CounterEvent has the previous and current value and state. It has the following interface: public interface CounterEvent { long getOldValue(); State getOldState(); long getNewValue(); State getNewState(); } Note The state is always State.VALID for unbounded strong counters and weak counters. State.LOWER_BOUND_REACHED and State.UPPER_BOUND_REACHED are only valid for bounded strong counters. Warning The weak counter reset() operation will trigger multiple notifications with intermediate values.
[ "<dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-clustered-counter</artifactId> </dependency>", "<infinispan> <cache-container ...> <!-- To persist counters, you need to configure the global state. --> <global-state> <!-- Global state configuration goes here. --> </global-state> <!-- Cache configuration goes here. --> <counters xmlns=\"urn:infinispan:config:counters:14.0\" num-owners=\"3\" reliability=\"CONSISTENT\"> <strong-counter name=\"c1\" initial-value=\"1\" storage=\"PERSISTENT\"/> <strong-counter name=\"c2\" initial-value=\"2\" storage=\"VOLATILE\" lower-bound=\"0\"/> <strong-counter name=\"c3\" initial-value=\"3\" storage=\"PERSISTENT\" upper-bound=\"5\"/> <strong-counter name=\"c4\" initial-value=\"4\" storage=\"VOLATILE\" lower-bound=\"0\" upper-bound=\"10\"/> <strong-counter name=\"c5\" initial-value=\"0\" upper-bound=\"100\" lifespan=\"60000\"/> <weak-counter name=\"c6\" initial-value=\"5\" storage=\"PERSISTENT\" concurrency-level=\"1\"/> </counters> </cache-container> </infinispan>", "GlobalConfigurationBuilder globalConfigurationBuilder = ...; CounterManagerConfigurationBuilder builder = globalConfigurationBuilder.addModule(CounterManagerConfigurationBuilder.class); builder.numOwner(3).reliability(Reliability.CONSISTENT); builder.addStrongCounter().name(\"c1\").initialValue(1).storage(Storage.PERSISTENT); builder.addStrongCounter().name(\"c2\").initialValue(2).lowerBound(0).storage(Storage.VOLATILE); builder.addStrongCounter().name(\"c3\").initialValue(3).upperBound(5).storage(Storage.PERSISTENT); builder.addStrongCounter().name(\"c4\").initialValue(4).lowerBound(0).upperBound(10).storage(Storage.VOLATILE); builder.addStrongCounter().name(\"c5\").initialValue(0).upperBound(100).lifespan(60000); builder.addWeakCounter().name(\"c6\").initialValue(5).concurrencyLevel(1).storage(Storage.PERSISTENT);", "CounterManager manager = ...; manager.defineCounter(\"c1\", CounterConfiguration.builder(CounterType.UNBOUNDED_STRONG).initialValue(1).storage(Storage.PERSISTENT).build()); manager.defineCounter(\"c2\", CounterConfiguration.builder(CounterType.BOUNDED_STRONG).initialValue(2).lowerBound(0).storage(Storage.VOLATILE).build()); manager.defineCounter(\"c3\", CounterConfiguration.builder(CounterType.BOUNDED_STRONG).initialValue(3).upperBound(5).storage(Storage.PERSISTENT).build()); manager.defineCounter(\"c4\", CounterConfiguration.builder(CounterType.BOUNDED_STRONG).initialValue(4).lowerBound(0).upperBound(10).storage(Storage.VOLATILE).build()); manager.defineCounter(\"c4\", CounterConfiguration.builder(CounterType.BOUNDED_STRONG).initialValue(0).upperBound(100).lifespan(60000).build()); manager.defineCounter(\"c6\", CounterConfiguration.builder(CounterType.WEAK).initialValue(5).concurrencyLevel(1).storage(Storage.PERSISTENT).build());", "CounterManager manager = if (!manager.isDefined(\"someCounter\")) { manager.define(\"someCounter\", ...); }", "// create or obtain your EmbeddedCacheManager EmbeddedCacheManager manager = ...; // retrieve the CounterManager CounterManager counterManager = EmbeddedCounterManagerFactory.asCounterManager(manager);", "// create or obtain your RemoteCacheManager RemoteCacheManager manager = ...; // retrieve the CounterManager CounterManager counterManager = RemoteCounterManagerFactory.asCounterManager(manager);", "String getName(); CompletableFuture<Long> getValue(); CompletableFuture<Void> reset(); <T extends CounterListener> Handle<T> addListener(T listener); CounterConfiguration getConfiguration(); CompletableFuture<Void> 
remove(); SyncStrongCounter sync(); //SyncWeakCounter for WeakCounter", "CounterManager counterManager = StrongCounter aCounter = counterManager.getStrongCounter(\"my-counter\");", "default CompletableFuture<Long> incrementAndGet() { return addAndGet(1L); } default CompletableFuture<Long> decrementAndGet() { return addAndGet(-1L); } CompletableFuture<Long> addAndGet(long delta); CompletableFuture<Boolean> compareAndSet(long expect, long update); CompletableFuture<Long> compareAndSwap(long expect, long update);", "public boolean isUpperBoundReached(); public boolean isLowerBoundReached();", "StrongCounter counter = counterManager.getStrongCounter(\"unbounded_counter\"); // incrementing the counter System.out.println(\"new value is \" + counter.incrementAndGet().get()); // decrement the counter's value by 100 using the functional API counter.addAndGet(-100).thenApply(v -> { System.out.println(\"new value is \" + v); return null; }).get(); // alternative, you can do some work while the counter is updated CompletableFuture<Long> f = counter.addAndGet(10); // ... do some work System.out.println(\"new value is \" + f.get()); // and then, check the current value System.out.println(\"current value is \" + counter.getValue().get()); // finally, reset to initial value counter.reset().get(); System.out.println(\"current value is \" + counter.getValue().get()); // or set to a new value if zero System.out.println(\"compare and set succeeded? \" + counter.compareAndSet(0, 1));", "StrongCounter counter = counterManager.getStrongCounter(\"bounded_counter\"); // incrementing the counter try { System.out.println(\"new value is \" + counter.addAndGet(100).get()); } catch (ExecutionException e) { Throwable cause = e.getCause(); if (cause instanceof CounterOutOfBoundsException) { if (((CounterOutOfBoundsException) cause).isUpperBoundReached()) { System.out.println(\"ops, upper bound reached.\"); } else if (((CounterOutOfBoundsException) cause).isLowerBoundReached()) { System.out.println(\"ops, lower bound reached.\"); } } } // now using the functional API counter.addAndGet(-100).handle((v, throwable) -> { if (throwable != null) { Throwable cause = throwable.getCause(); if (cause instanceof CounterOutOfBoundsException) { if (((CounterOutOfBoundsException) cause).isUpperBoundReached()) { System.out.println(\"ops, upper bound reached.\"); } else if (((CounterOutOfBoundsException) cause).isLowerBoundReached()) { System.out.println(\"ops, lower bound reached.\"); } } return null; } System.out.println(\"new value is \" + v); return null; }).get();", "StrongCounter counter = counterManager.getStrongCounter(\"my-counter\"); long oldValue, newValue; do { oldValue = counter.getValue().get(); newValue = someLogic(oldValue); } while (!counter.compareAndSet(oldValue, newValue).get());", "StrongCounter counter = counterManager.getStrongCounter(\"my-counter\"); long oldValue = counter.getValue().get(); long currentValue, newValue; do { currentValue = oldValue; newValue = someLogic(oldValue); } while ((oldValue = counter.compareAndSwap(oldValue, newValue).get()) != currentValue);", "// 5 request per minute CounterConfiguration configuration = CounterConfiguration.builder(CounterType.BOUNDED_STRONG) .upperBound(5) .lifespan(60000) .build(); counterManager.defineCounter(\"rate_limiter\", configuration); StrongCounter counter = counterManager.getStrongCounter(\"rate_limiter\"); // on each operation, invoke try { counter.incrementAndGet().get(); // continue with operation } catch (InterruptedException e) { 
Thread.currentThread().interrupt(); } catch (ExecutionException e) { if (e.getCause() instanceof CounterOutOfBoundsException) { // maximum rate. discard operation return; } else { // unexpected error, handling property } }", "CounterManager counterManager = StrongCounter aCounter = counterManager.getWeakCounter(\"my-counter);", "default CompletableFuture<Void> increment() { return add(1L); } default CompletableFuture<Void> decrement() { return add(-1L); } CompletableFuture<Void> add(long delta);", "WeakCounter counter = counterManager.getWeakCounter(\"my_counter\"); // increment the counter and check its result counter.increment().get(); System.out.println(\"current value is \" + counter.getValue()); CompletableFuture<Void> f = counter.add(-100); //do some work f.get(); //wait until finished System.out.println(\"current value is \" + counter.getValue().get()); //using the functional API counter.reset().whenComplete((aVoid, throwable) -> System.out.println(\"Reset done \" + (throwable == null ? \"successfully\" : \"unsuccessfully\"))).get(); System.out.println(\"current value is \" + counter.getValue().get());", "<T extends CounterListener> Handle<T> addListener(T listener);", "public interface CounterListener { void onUpdate(CounterEvent entry); }", "public interface Handle<T extends CounterListener> { T getCounterListener(); void remove(); }", "public interface CounterEvent { long getOldValue(); State getOldState(); long getNewValue(); State getNewState(); }" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/configuring_data_grid_caches/clustered-counters
Chapter 3. Installing a cluster quickly on Alibaba Cloud
Chapter 3. Installing a cluster quickly on Alibaba Cloud In OpenShift Container Platform version 4.14, you can install a cluster on Alibaba Cloud that uses the default configuration options. Important Alibaba Cloud on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You registered your domain . If you use a firewall, you configured it to allow the sites that your cluster requires access to. You have created the required Alibaba Cloud resources . If the cloud Resource Access Management (RAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain Resource Access Management (RAM) credentials . 3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. 
Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. 
For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Alibaba Cloud. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select alibabacloud as the platform to target. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Provide a descriptive name for your cluster. Installing the cluster into Alibaba Cloud requires that the Cloud Credential Operator (CCO) operate in manual mode. Modify the install-config.yaml file to set the credentialsMode parameter to Manual : Example install-config.yaml configuration file with credentialsMode set to Manual apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled ... 1 Add this line to set the credentialsMode to Manual . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 3.6. Generating the required installation manifests You must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. 
Procedure Generate the manifests by running the following command from the directory that contains the installation program: USD openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the directory in which the installation program creates files. 3.7. Creating credentials for OpenShift Container Platform components with the ccoctl tool You can use the OpenShift Container Platform Cloud Credential Operator (CCO) utility to automate the creation of Alibaba Cloud RAM users and policies for each in-cluster component. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Created a RAM user with sufficient permission to create the OpenShift Container Platform cluster. Added the AccessKeyID ( access_key_id ) and AccessKeySecret ( access_key_secret ) of that RAM user into the ~/.alibabacloud/credentials file on your local computer. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: Run the following command to use the tool: USD ccoctl alibabacloud create-ram-users \ --name <name> \ 1 --region=<alibaba_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> 4 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the Alibaba Cloud region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Specify the directory where the generated component credentials secrets will be placed. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. 
Example output 2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved credentials configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml ... Note A RAM user can have up to two AccessKeys at the same time. If you run ccoctl alibabacloud create-ram-users more than twice, the previously generated manifests secret becomes stale and you must reapply the newly generated secrets. Verify that the OpenShift Container Platform secrets are created: USD ls <path_to_ccoctl_output_dir>/manifests Example output openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml You can verify that the RAM users and policies are created by querying Alibaba Cloud. For more information, refer to Alibaba Cloud documentation on listing RAM users and policies. Copy the generated credential files to the target manifests directory: USD cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation_dir>/manifests/ where: <path_to_ccoctl_output_dir> Specifies the directory created by the ccoctl alibabacloud create-ram-users command. <path_to_installation_dir> Specifies the directory in which the installation program creates files. 3.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... 
INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.9. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. 
Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully by using the exported configuration: USD oc whoami Example output system:admin 3.11. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. 3.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. See About remote health monitoring for more information about the Telemetry service. 3.13. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting .
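The individual verification commands above can be combined into a single post-installation check. The following is a minimal sketch, not part of the official procedure: the installation directory ./install_dir is a placeholder, and it assumes the oc binary is already on your PATH.
#!/bin/bash
# Minimal post-installation sanity check (sketch only).
# Assumes the installation directory is ./install_dir and oc is on the PATH.
set -euo pipefail
export KUBECONFIG=./install_dir/auth/kubeconfig
# Confirm that the exported kubeconfig works and reports the expected identity.
oc whoami
# Confirm that all nodes are Ready and all cluster Operators are Available.
oc get nodes
oc get clusteroperators
# Print the web console route and where to find the kubeadmin password.
oc get routes -n openshift-console | grep 'console-openshift'
echo "kubeadmin password file: ./install_dir/auth/kubeadmin-password"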
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl alibabacloud create-ram-users --name <name> \\ 1 --region=<alibaba_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> 4", "2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved credentials configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml", "ls <path_to_ccoctl_output_dir>/manifests", "openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml", "cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation>dir>/manifests/", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_alibaba/installing-alibaba-default
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/replacing_nodes/making-open-source-more-inclusive
Chapter 175. JGroups Component
Chapter 175. JGroups Component Available as of Camel version 2.13 JGroups is a toolkit for reliable multicast communication. The jgroups: component provides exchange of messages between Camel infrastructure and JGroups clusters. Maven users will need to add the following dependency to their pom.xml for this component. <dependency> <groupId>org.apache-extras.camel-extra</groupId> <artifactId>camel-jgroups</artifactId> <!-- use the same version as your Camel core version --> <version>x.y.z</version> </dependency> Starting from the Camel 2.13.0 , JGroups component has been moved from Camel Extra under the umbrella of the Apache Camel. If you are using Camel 2.13.0 or higher, please use the following POM entry instead. <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jgroups</artifactId> <!-- use the same version as your Camel core version --> <version>x.y.z</version> </dependency> 175.1. URI format jgroups:clusterName[?options] Where clusterName represents the name of the JGroups cluster the component should connect to. 175.2. Options The JGroups component supports 4 options, which are listed below. Name Description Default Type channel (common) Channel to use JChannel channelProperties (common) Specifies configuration properties of the JChannel used by the endpoint. String enableViewMessages (consumer) If set to true, the consumer endpoint will receive org.jgroups.View messages as well (not only org.jgroups.Message instances). By default only regular messages are consumed by the endpoint. false boolean resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The JGroups endpoint is configured using URI syntax: with the following path and query parameters: 175.2.1. Path Parameters (1 parameters): Name Description Default Type clusterName Required The name of the JGroups cluster the component should connect to. String 175.2.2. Query Parameters (6 parameters): Name Description Default Type channelProperties (common) Specifies configuration properties of the JChannel used by the endpoint. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean enableViewMessages (consumer) If set to true, the consumer endpoint will receive org.jgroups.View messages as well (not only org.jgroups.Message instances). By default only regular messages are consumed by the endpoint. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 175.3. Spring Boot Auto-Configuration The component supports 9 options, which are listed below. 
Name Description Default Type camel.component.jgroups.channel Channel to use. The option is a org.jgroups.JChannel type. String camel.component.jgroups.channel-properties Specifies configuration properties of the JChannel used by the endpoint. String camel.component.jgroups.enable-view-messages If set to true, the consumer endpoint will receive org.jgroups.View messages as well (not only org.jgroups.Message instances). By default only regular messages are consumed by the endpoint. false Boolean camel.component.jgroups.enabled Enable jgroups component true Boolean camel.component.jgroups.lock.cluster.service.enabled Sets if the jgroups lock cluster service should be enabled or not, default is false. false Boolean camel.component.jgroups.lock.cluster.service.id Cluster Service ID String camel.component.jgroups.lock.cluster.service.jgroups-cluster-name JGroups Cluster name String camel.component.jgroups.lock.cluster.service.jgroups-config JGrups configuration File name String camel.component.jgroups.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 175.4. Headers Header Constant Since version Description JGROUPS_ORIGINAL_MESSAGE JGroupsEndpoint.HEADER_JGROUPS_ORIGINAL_MESSAGE 2.13.0 The original org.jgroups.Message instance from which the body of the consumed message has been extracted. JGROUPS_SRC `JGroupsEndpoint.`HEADER_JGROUPS_SRC 2.10.0 Consumer : The org.jgroups.Address instance extracted by org.jgroups.Message .getSrc() method of the consumed message. Producer : The custom source org.jgroups.Address of the message to be sent. JGROUPS_DEST `JGroupsEndpoint.`HEADER_JGROUPS_DEST 2.10.0 Consumer : The org.jgroups.Address instance extracted by org.jgroups.Message .getDest() method of the consumed message. Producer : The custom destination org.jgroups.Address of the message to be sent. JGROUPS_CHANNEL_ADDRESS `JGroupsEndpoint.`HEADER_JGROUPS_CHANNEL_ADDRESS 2.13.0 Address ( org.jgroups.Address ) of the channel associated with the endpoint. 175.5. Usage Using jgroups component on the consumer side of the route will capture messages received by the JChannel associated with the endpoint and forward them to the Camel route. JGroups consumer processes incoming messages asynchronously . // Capture messages from cluster named // 'clusterName' and send them to Camel route. from("jgroups:clusterName").to("seda:queue"); Using jgroups component on the producer side of the route will forward body of the Camel exchanges to the JChannel instance managed by the endpoint. // Send message to the cluster named 'clusterName' from("direct:start").to("jgroups:clusterName"); 175.6. Predefined filters Starting from version 2.13.0 of Camel, JGroups component comes with predefined filters factory class named JGroupsFilters. If you would like to consume only view changes notifications sent to coordinator of the cluster (and ignore these sent to the "slave" nodes), use the JGroupsFilters.dropNonCoordinatorViews() filter. This filter is particularly useful when you want a single Camel node to become the master in the cluster, because messages passing this filter notifies you when given node has become a coordinator of the cluster. The snippet below demonstrates how to collect only messages received by the master node. import static org.apache.camel.component.jgroups.JGroupsFilters.dropNonCoordinatorViews; ... from("jgroups:clusterName?enableViewMessages=true"). 
filter(dropNonCoordinatorViews()). to("seda:masterNodeEventsQueue"); 175.7. Predefined expressions Starting from version 2.13.0 of Camel, JGroups component comes with predefined expressions factory class named JGroupsExpressions. If you would like to create delayer that would affect the route only if the Camel context has not been started yet, use the JGroupsExpressions.delayIfContextNotStarted(long delay) factory method. The expression created by this factory method will return given delay value only if the Camel context is in the state different than started . This expression is particularly useful if you would like to use JGroups component for keeping singleton (master) route within the cluster. Control Bus start command won't initialize the singleton route if the Camel Context hasn't been yet started. So you need to delay a startup of the master route, to be sure that it has been initialized after the Camel Context startup. Because such scenario can happen only during the initialization of the cluster, we don't want to delay startup of the slave node becoming the new master - that's why we need a conditional delay expression. The snippet below demonstrates how to use conditional delaying with the JGroups component to delay the initial startup of master node in the cluster. import static java.util.concurrent.TimeUnit.SECONDS; import static org.apache.camel.component.jgroups.JGroupsExpressions.delayIfContextNotStarted; import static org.apache.camel.component.jgroups.JGroupsFilters.dropNonCoordinatorViews; ... from("jgroups:clusterName?enableViewMessages=true"). filter(dropNonCoordinatorViews()). threads().delay(delayIfContextNotStarted(SECONDS.toMillis(5))). // run in separated and delayed thread. Delay only if the context hasn't been started already. to("controlbus:route?routeId=masterRoute&action=start&async=true"); from("timer://master?repeatCount=1").routeId("masterRoute").autoStartup(false).to(masterMockUri); 175.8. Examples 175.8.1. Sending (receiving) messages to (from) the JGroups cluster In order to send message to the JGroups cluster use producer endpoint, just as demonstrated on the snippet below. from("direct:start").to("jgroups:myCluster"); ... producerTemplate.sendBody("direct:start", "msg") To receive the message from the snippet above (on the same or the other physical machine) listen on the messages coming from the given cluster, just as demonstrated on the code fragment below. mockEndpoint.setExpectedMessageCount(1); mockEndpoint.message(0).body().isEqualTo("msg"); ... from("jgroups:myCluster").to("mock:messagesFromTheCluster"); ... mockEndpoint.assertIsSatisfied(); 175.8.2. Receive cluster view change notifications The snippet below demonstrates how to create the consumer endpoint listening to the notifications regarding cluster membership changes. By default only regular messages are consumed by the endpoint. mockEndpoint.setExpectedMessageCount(1); mockEndpoint.message(0).body().isInstanceOf(org.jgroups.View.class); ... from("jgroups:clusterName?enableViewMessages=true").to(mockEndpoint); ... mockEndpoint.assertIsSatisfied(); 175.8.3. Keeping singleton route within the cluster The snippet below demonstrates how to keep the singleton consumer route in the cluster of Camel Contexts. As soon as the master node dies, one of the slaves will be elected as a new master and started. In this particular example we want to keep singleton jetty instance listening for the requests on address` http://localhost:8080/orders` . 
JGroupsLockClusterService service = new JGroupsLockClusterService(); service.setId("uniqueNodeId"); ... context.addService(service); from("master:mycluster:jetty:http://localhost:8080/orders").to("jms:orders");
[ "<dependency> <groupId>org.apache-extras.camel-extra</groupId> <artifactId>camel-jgroups</artifactId> <!-- use the same version as your Camel core version --> <version>x.y.z</version> </dependency>", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jgroups</artifactId> <!-- use the same version as your Camel core version --> <version>x.y.z</version> </dependency>", "jgroups:clusterName[?options]", "jgroups:clusterName", "// Capture messages from cluster named // 'clusterName' and send them to Camel route. from(\"jgroups:clusterName\").to(\"seda:queue\");", "// Send message to the cluster named 'clusterName' from(\"direct:start\").to(\"jgroups:clusterName\");", "import static org.apache.camel.component.jgroups.JGroupsFilters.dropNonCoordinatorViews; from(\"jgroups:clusterName?enableViewMessages=true\"). filter(dropNonCoordinatorViews()). to(\"seda:masterNodeEventsQueue\");", "import static java.util.concurrent.TimeUnit.SECONDS; import static org.apache.camel.component.jgroups.JGroupsExpressions.delayIfContextNotStarted; import static org.apache.camel.component.jgroups.JGroupsFilters.dropNonCoordinatorViews; from(\"jgroups:clusterName?enableViewMessages=true\"). filter(dropNonCoordinatorViews()). threads().delay(delayIfContextNotStarted(SECONDS.toMillis(5))). // run in separated and delayed thread. Delay only if the context hasn't been started already. to(\"controlbus:route?routeId=masterRoute&action=start&async=true\"); from(\"timer://master?repeatCount=1\").routeId(\"masterRoute\").autoStartup(false).to(masterMockUri);", "from(\"direct:start\").to(\"jgroups:myCluster\"); producerTemplate.sendBody(\"direct:start\", \"msg\")", "mockEndpoint.setExpectedMessageCount(1); mockEndpoint.message(0).body().isEqualTo(\"msg\"); from(\"jgroups:myCluster\").to(\"mock:messagesFromTheCluster\"); mockEndpoint.assertIsSatisfied();", "mockEndpoint.setExpectedMessageCount(1); mockEndpoint.message(0).body().isInstanceOf(org.jgroups.View.class); from(\"jgroups:clusterName?enableViewMessages=true\").to(mockEndpoint); mockEndpoint.assertIsSatisfied();", "JGroupsLockClusterService service = new JGroupsLockClusterService(); service.setId(\"uniqueNodeId\"); context.addService(service); from(\"master:mycluster:jetty:http://localhost:8080/orders\").to(\"jms:orders\");" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/jgroups-component
24.6.3. Configuring Net-SNMP
24.6.3. Configuring Net-SNMP To change the Net-SNMP Agent Daemon configuration, edit the /etc/snmp/snmpd.conf configuration file. The default snmpd.conf file shipped with Red Hat Enterprise Linux 6 is heavily commented and serves as a good starting point for agent configuration. This section focuses on two common tasks: setting system information and configuring authentication. For more information about available configuration directives, see the snmpd.conf (5) manual page. Additionally, there is a utility in the net-snmp package named snmpconf which can be used to interactively generate a valid agent configuration. Note that the net-snmp-utils package must be installed in order to use the snmpwalk utility described in this section. Note For any changes to the configuration file to take effect, force the snmpd service to re-read the configuration by running the following command as root : service snmpd reload 24.6.3.1. Setting System Information Net-SNMP provides some rudimentary system information via the system tree. For example, the following snmpwalk command shows the system tree with a default agent configuration. By default, the sysName object is set to the host name. The sysLocation and sysContact objects can be configured in the /etc/snmp/snmpd.conf file by changing the value of the syslocation and syscontact directives, for example: After making changes to the configuration file, reload the configuration and test it by running the snmpwalk command again: 24.6.3.2. Configuring Authentication The Net-SNMP Agent Daemon supports all three versions of the SNMP protocol. The first two versions (1 and 2c) provide for simple authentication using a community string . This string is a shared secret between the agent and any client utilities. The string is passed in clear text over the network however and is not considered secure. Version 3 of the SNMP protocol supports user authentication and message encryption using a variety of protocols. The Net-SNMP agent also supports tunneling over SSH, TLS authentication with X.509 certificates, and Kerberos authentication. Configuring SNMP Version 2c Community To configure an SNMP version 2c community , use either the rocommunity or rwcommunity directive in the /etc/snmp/snmpd.conf configuration file. The format of the directives is the following: directive community [ source [ OID ] ] where community is the community string to use, source is an IP address or subnet, and OID is the SNMP tree to provide access to. For example, the following directive provides read-only access to the system tree to a client using the community string " redhat " on the local machine: To test the configuration, use the snmpwalk command with the -v and -c options. Configuring SNMP Version 3 User To configure an SNMP version 3 user , use the net-snmp-create-v3-user command. This command adds entries to the /var/lib/net-snmp/snmpd.conf and /etc/snmp/snmpd.conf files which create the user and grant access to the user. Note that the net-snmp-create-v3-user command may only be run when the agent is not running. The following example creates the " admin " user with the password " redhatsnmp " : The rwuser directive (or rouser when the -ro command-line option is supplied) that net-snmp-create-v3-user adds to /etc/snmp/snmpd.conf has a similar format to the rwcommunity and rocommunity directives: directive user [ noauth | auth | priv ] [ OID ] where user is a user name and OID is the SNMP tree to provide access to. 
By default, the Net-SNMP Agent Daemon allows only authenticated requests (the auth option). The noauth option allows you to permit unauthenticated requests, and the priv option enforces the use of encryption. The authpriv option specifies that requests must be authenticated and replies should be encrypted. For example, the following line grants the user " admin " read-write access to the entire tree: To test the configuration, create a .snmp directory in your user's home directory and a configuration file named snmp.conf in that directory ( ~/.snmp/snmp.conf ) with the following lines: The snmpwalk command will now use these authentication settings when querying the agent:
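As an alternative to storing the settings in ~/.snmp/snmp.conf , the SNMPv3 credentials can be passed directly on the command line. This is a sketch that reuses the example values from this section (user admin , pass-phrase redhatsnmp ) and assumes the net-snmp-utils package is installed.
# Query the agent as the SNMPv3 user created above, without a ~/.snmp/snmp.conf file.
snmpwalk -v3 -u admin -l authPriv -a MD5 -A redhatsnmp -x DES -X redhatsnmp localhost system
# For comparison, an SNMP version 2c query that uses the example read-only community string.
snmpwalk -v2c -c redhat localhost system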
[ "~]# snmpwalk -v2c -c public localhost system SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 2.6.32-122.el6.x86_64 #1 SMP Wed Mar 9 23:54:34 EST 2011 x86_64 SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10 DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (99554) 0:16:35.54 SNMPv2-MIB::sysContact.0 = STRING: Root <root@localhost> (configure /etc/snmp/snmp.local.conf) SNMPv2-MIB::sysName.0 = STRING: localhost.localdomain SNMPv2-MIB::sysLocation.0 = STRING: Unknown (edit /etc/snmp/snmpd.conf)", "syslocation Datacenter, Row 3, Rack 2 syscontact UNIX Admin <[email protected]>", "~]# service snmpd reload Reloading snmpd: [ OK ] ~]# snmpwalk -v2c -c public localhost system SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 2.6.32-122.el6.x86_64 #1 SMP Wed Mar 9 23:54:34 EST 2011 x86_64 SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10 DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (158357) 0:26:23.57 SNMPv2-MIB::sysContact.0 = STRING: UNIX Admin <[email protected]> SNMPv2-MIB::sysName.0 = STRING: localhost.localdomain SNMPv2-MIB::sysLocation.0 = STRING: Datacenter, Row 3, Rack 2", "rocommunity redhat 127.0.0.1 .1.3.6.1.2.1.1", "~]# snmpwalk -v2c -c redhat localhost system SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 2.6.32-122.el6.x86_64 #1 SMP Wed Mar 9 23:54:34 EST 2011 x86_64 SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10 DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (158357) 0:26:23.57 SNMPv2-MIB::sysContact.0 = STRING: UNIX Admin <[email protected]> SNMPv2-MIB::sysName.0 = STRING: localhost.localdomain SNMPv2-MIB::sysLocation.0 = STRING: Datacenter, Row 3, Rack 2", "~]# service snmpd stop Stopping snmpd: [ OK ] ~]# net-snmp-create-v3-user Enter a SNMPv3 user name to create: admin Enter authentication pass-phrase: redhatsnmp Enter encryption pass-phrase: [press return to reuse the authentication pass-phrase] adding the following line to /var/lib/net-snmp/snmpd.conf: createUser admin MD5 \"redhatsnmp\" DES adding the following line to /etc/snmp/snmpd.conf: rwuser admin ~]# service snmpd start Starting snmpd: [ OK ]", "rwuser admin authpriv .1", "defVersion 3 defSecurityLevel authPriv defSecurityName admin defPassphrase redhatsnmp", "~]USD snmpwalk -v3 localhost system SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 2.6.32-122.el6.x86_64 #1 SMP Wed Mar 9 23:54:34 EST 2011 x86_64 [output truncated]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-system_monitoring_tools-net-snmp-configuring
Chapter 4. Cloning Virtual Machines
Chapter 4. Cloning Virtual Machines There are two types of guest virtual machine instances used in creating guest copies: Clones are instances of a single virtual machine. Clones can be used to set up a network of identical virtual machines, and they can also be distributed to other destinations. Templates are instances of a virtual machine that are designed to be used as a source for cloning. You can create multiple clones from a template and make minor modifications to each clone. This is useful in seeing the effects of these changes on the system. Both clones and templates are virtual machine instances. The difference between them is in how they are used. For the created clone to work properly, information and configurations unique to the virtual machine that is being cloned usually has to be removed before cloning. The information that needs to be removed differs, based on how the clones will be used. The information and configurations to be removed may be on any of the following levels: Platform level information and configurations include anything assigned to the virtual machine by the virtualization solution. Examples include the number of Network Interface Cards (NICs) and their MAC addresses. Guest operating system level information and configurations include anything configured within the virtual machine. Examples include SSH keys. Application level information and configurations include anything configured by an application installed on the virtual machine. Examples include activation codes and registration information. Note This chapter does not include information about removing the application level, because the information and approach is specific to each application. As a result, some of the information and configurations must be removed from within the virtual machine, while other information and configurations must be removed from the virtual machine using the virtualization environment (for example, Virtual Machine Manager or VMware). Note For information on cloning storage volumes, see Section 13.3.2.1, "Creating Storage Volumes with virsh" . 4.1. Preparing Virtual Machines for Cloning Before cloning a virtual machine, it must be prepared by running the virt-sysprep utility on its disk image, or by using the following steps: Procedure 4.1. Preparing a virtual machine for cloning Setup the virtual machine Build the virtual machine that is to be used for the clone or template. Install any software needed on the clone. Configure any non-unique settings for the operating system. Configure any non-unique application settings. Remove the network configuration Remove any persistent udev rules using the following command: Note If udev rules are not removed, the name of the first NIC may be eth1 instead of eth0. Remove unique network details from ifcfg scripts by making the following edits to /etc/sysconfig/network-scripts/ifcfg-eth[x] : Remove the HWADDR and Static lines Note If the HWADDR does not match the new guest's MAC address, the ifcfg will be ignored. Therefore, it is important to remove the HWADDR from the file. Ensure that a DHCP configuration remains that does not include a HWADDR or any unique information. 
Ensure that the file includes the following lines: If the following files exist, ensure that they contain the same content: /etc/sysconfig/networking/devices/ifcfg-eth[x] /etc/sysconfig/networking/profiles/default/ifcfg-eth[x] Note If NetworkManager or any special settings were used with the virtual machine, ensure that any additional unique information is removed from the ifcfg scripts. Remove registration details Remove registration details using one of the following: For Red Hat Network (RHN) registered guest virtual machines, use the following command: For Red Hat Subscription Manager (RHSM) registered guest virtual machines: If the original virtual machine will not be used, use the following commands: If the original virtual machine will be used, run only the following command: The original RHSM profile remains in the Portal. To reactivate your RHSM registration on the virtual machine after it is cloned, do the following: Obtain your customer identity code: Register the virtual machine using the obtained ID code: Remove other unique details Remove any sshd public/private key pairs using the following command: Note Removing ssh keys prevents problems with ssh clients not trusting these hosts. Remove any other application-specific identifiers or configurations that may cause conflicts if running on multiple machines. Configure the virtual machine to run configuration wizards on the next boot Configure the virtual machine to run the relevant configuration wizards the next time it is booted by doing one of the following: For Red Hat Enterprise Linux 6 and below, create an empty file on the root file system called .unconfigured using the following command: For Red Hat Enterprise Linux 7, enable the first boot and initial-setup wizards by running the following commands: Note The wizards that run on the next boot depend on the configurations that have been removed from the virtual machine. In addition, on the first boot of the clone, it is recommended that you change the hostname.
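As noted at the start of this section, the virt-sysprep utility can perform most of the cleanup above in one pass on the guest's disk image. The following is a sketch only: the domain name rhel7-template and the clone name rhel7-clone1 are placeholders, and it assumes the packages that provide virt-sysprep and virt-clone are installed and that the source guest is shut down.
# Shut down the source guest before modifying its disk image.
virsh shutdown rhel7-template
# Remove host-specific data (SSH host keys, persistent udev rules, logs, and so on)
# from the guest's disk image.
virt-sysprep -d rhel7-template
# Create a clone with a new MAC address and its own copy of the storage.
virt-clone --original rhel7-template --name rhel7-clone1 --auto-clone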
[ "rm -f /etc/udev/rules.d/70-persistent-net.rules", "DEVICE=eth[x] BOOTPROTO=none ONBOOT=yes #NETWORK=10.0.1.0 <- REMOVE #NETMASK=255.255.255.0 <- REMOVE #IPADDR=10.0.1.20 <- REMOVE #HWADDR=xx:xx:xx:xx:xx <- REMOVE #USERCTL=no <- REMOVE Remove any other *unique* or non-desired settings, such as UUID.", "DEVICE=eth[x] BOOTPROTO=dhcp ONBOOT=yes", "DEVICE=eth[x] ONBOOT=yes", "rm /etc/sysconfig/rhn/systemid", "subscription-manager unsubscribe --all subscription-manager unregister subscription-manager clean", "subscription-manager clean", "subscription-manager identity subscription-manager identity: 71rd64fx-6216-4409-bf3a-e4b7c7bd8ac9", "subscription-manager register --consumerid=71rd64fx-6216-4409-bf3a-e4b7c7bd8ac9", "rm -rf /etc/ssh/ssh_host_*", "touch /.unconfigured", "sed -ie 's/RUN_FIRSTBOOT=NO/RUN_FIRSTBOOT=YES/' /etc/sysconfig/firstboot systemctl enable firstboot-graphical systemctl enable initial-setup-graphical" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/cloning_virtual_machines
Chapter 37. Networking
Chapter 37. Networking rsync component, BZ#1082496 The rsync utility cannot be run as a socket-activated service because the rsyncd@.service file is missing from the rsync package. Consequently, the systemctl start rsyncd.socket command does not work. However, running rsync as a daemon by executing the systemctl start rsyncd.service command works as expected. InfiniBand component, BZ#1172783 The libocrdma package is not included in the default package set of the InfiniBand Support group. Consequently, when users select the InfiniBand Support group and are expecting RDMA over Converged Ethernet (RoCE) to work on Emulex OneConnect adapters, the necessary driver, libocrdma , is not installed by default. On first boot, the user can manually install the missing package by issuing this command: As a result, the user will now be able to use the Emulex OneConnect devices in RoCE mode. vsftpd component, BZ#1058712 The vsftpd daemon does not currently support cipher suites based on the Elliptic Curve Diffie-Hellman Exchange (ECDHE) key-exchange protocol. Consequently, when vsftpd is configured to use such suites, the connection is refused with a no shared cipher SSL alert. arptables component, BZ#1018135 Red Hat Enterprise Linux 7 introduces the arptables packages, which replace the arptables_jf packages included in Red Hat Enterprise Linux 6. All users of arptables are advised to update their scripts because the syntax of this version differs from arptables_jf. openssl component, BZ#1062656 It is not possible to connect to any Wi-Fi Protected Access (WPA) Enterprise Access Point (AP) that requires MD5-signed certificates. To work around this problem, copy the wpa_supplicant.service file from the /usr/lib/systemd/system/ directory to the /etc/systemd/system/ directory and add the following line to the Service section of the file: Then run the systemctl daemon-reload command as root to reload the service file. Important Note that MD5 certificates are highly insecure and Red Hat does not recommend using them.
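The openssl workaround described above can be scripted as follows. This is only a sketch of the documented steps, run as root; the sed expression is one possible way to append the line and assumes the copied unit file contains a [Service] section header.
# Copy the unit file so that the change is not overwritten by package updates.
cp /usr/lib/systemd/system/wpa_supplicant.service /etc/systemd/system/
# Append the environment variable to the Service section of the copied file.
sed -i '/^\[Service\]/a Environment=OPENSSL_ENABLE_MD5_VERIFY=1' /etc/systemd/system/wpa_supplicant.service
# Reload the service files so that the change takes effect.
systemctl daemon-reload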
[ "~]# yum install libocrdma", "Environment=OPENSSL_ENABLE_MD5_VERIFY=1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/known-issues-networking
Chapter 12. Configuring firewalld by using RHEL system roles
Chapter 12. Configuring firewalld by using RHEL system roles RHEL system roles is a set of content for the Ansible automation utility. This content, together with the Ansible automation utility, provides a consistent configuration interface to remotely manage multiple systems at once. The rhel-system-roles package contains the rhel-system-roles.firewall RHEL system role. This role was introduced to automate the configuration of the firewalld service. With the firewall RHEL system role you can configure many different firewalld parameters, for example: Zones The services for which packets should be allowed Granting, rejection, or dropping of traffic access to ports Forwarding of ports or port ranges for a zone 12.1. Resetting the firewalld settings by using the firewall RHEL system role Over time, updates to your firewall configuration can accumulate to the point where they could lead to unintended security risks. With the firewall RHEL system role, you can reset the firewalld settings to their default state in an automated fashion. This way you can efficiently remove any unintentional or insecure firewall rules and simplify their management. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Reset firewalld example hosts: managed-node-01.example.com tasks: - name: Reset firewalld ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - previous: replaced The settings specified in the example playbook include the following: previous: replaced Removes all existing user-defined settings and resets the firewalld settings to defaults. If you combine the previous: replaced parameter with other settings, the firewall role removes all existing settings before applying new ones. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.firewall/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Run this command on the control node to remotely check that all firewall configuration on your managed node was reset to its default values: Additional resources /usr/share/ansible/roles/rhel-system-roles.firewall/README.md file /usr/share/doc/rhel-system-roles/firewall/ directory 12.2. Forwarding incoming traffic in firewalld from one local port to a different local port by using the firewall RHEL system role You can use the firewall RHEL system role to remotely configure forwarding of incoming traffic from one local port to a different local port. For example, if you have an environment where multiple services co-exist on the same machine and need the same default port, port conflicts are likely to occur. These conflicts can disrupt services and cause downtime. With the firewall RHEL system role, you can efficiently forward traffic to alternative ports to ensure that your services can run simultaneously without modification to their configuration. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them.
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure firewalld hosts: managed-node-01.example.com tasks: - name: Forward incoming traffic on port 8080 to 443 ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - forward_port: 8080/tcp;443; state: enabled runtime: true permanent: true The settings specified in the example playbook include the following: forward_port: 8080/tcp;443 Traffic coming to the local port 8080 using the TCP protocol is forwarded to the port 443. runtime: true Enables changes in the runtime configuration. The default is set to true . For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.firewall/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification On the control node, run the following command to remotely check the forwarded-ports on your managed node: Additional resources /usr/share/ansible/roles/rhel-system-roles.firewall/README.md file /usr/share/doc/rhel-system-roles/firewall/ directory 12.3. Configuring a firewalld DMZ zone by using the firewall RHEL system role As a system administrator, you can use the firewall RHEL system role to configure a dmz zone on the enp1s0 interface to permit HTTPS traffic to the zone. In this way, you enable external users to access your web servers. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure firewalld hosts: managed-node-01.example.com tasks: - name: Creating a DMZ with access to HTTPS port and masquerading for hosts in DMZ ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - zone: dmz interface: enp1s0 service: https state: enabled runtime: true permanent: true For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.firewall/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification On the control node, run the following command to remotely check the information about the dmz zone on your managed node: Additional resources /usr/share/ansible/roles/rhel-system-roles.firewall/README.md file /usr/share/doc/rhel-system-roles/firewall/ directory
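For context, the playbooks in this chapter are run against an inventory that lists the managed nodes. The following is a minimal sketch, not part of the role documentation: the inventory path ~/inventory.ini is a placeholder, and it assumes SSH access and sudo are already configured as described in the prerequisites.
# A minimal inventory that lists the managed node used in the examples above.
cat > ~/inventory.ini <<'EOF'
managed-node-01.example.com
EOF
# Check the playbook syntax, run the playbook, then verify the dmz zone remotely.
ansible-playbook -i ~/inventory.ini --syntax-check ~/playbook.yml
ansible-playbook -i ~/inventory.ini ~/playbook.yml
ansible -i ~/inventory.ini managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --zone=dmz --list-all'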
[ "--- - name: Reset firewalld example hosts: managed-node-01.example.com tasks: - name: Reset firewalld ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - previous: replaced", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --list-all-zones'", "--- - name: Configure firewalld hosts: managed-node-01.example.com tasks: - name: Forward incoming traffic on port 8080 to 443 ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - forward_port: 8080/tcp;443; state: enabled runtime: true permanent: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --list-forward-ports' managed-node-01.example.com | CHANGED | rc=0 >> port=8080:proto=tcp:toport=443:toaddr=", "--- - name: Configure firewalld hosts: managed-node-01.example.com tasks: - name: Creating a DMZ with access to HTTPS port and masquerading for hosts in DMZ ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - zone: dmz interface: enp1s0 service: https state: enabled runtime: true permanent: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --zone=dmz --list-all' managed-node-01.example.com | CHANGED | rc=0 >> dmz (active) target: default icmp-block-inversion: no interfaces: enp1s0 sources: services: https ssh ports: protocols: forward: no masquerade: no forward-ports: source-ports: icmp-blocks:" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/automating_system_administration_by_using_rhel_system_roles/assembly_configuring-firewalld-using-system-roles_automating-system-administration-by-using-rhel-system-roles
Chapter 13. Converting a connected cluster to a disconnected cluster
Chapter 13. Converting a connected cluster to a disconnected cluster There might be some scenarios where you need to convert your OpenShift Container Platform cluster from a connected cluster to a disconnected cluster. A disconnected cluster, also known as a restricted cluster, does not have an active connection to the internet. As such, you must mirror the contents of your registries and installation media. You can create this mirror registry on a host that can access both the internet and your closed network, or copy images to a device that you can move across network boundaries. This topic describes the general process for converting an existing, connected cluster into a disconnected cluster. 13.1. About the mirror registry You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry such as Red Hat Quay, JFrog Artifactory, Sonatype Nexus Repository, or Harbor. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift , a small-scale container registry included with OpenShift Container Platform subscriptions. You can use any container registry that supports Docker v2-2 , such as Red Hat Quay, the mirror registry for Red Hat OpenShift , Artifactory, Sonatype Nexus Repository, or Harbor. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry. Important The OpenShift image registry cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. If choosing a container registry that is not the mirror registry for Red Hat OpenShift , it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters. When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring . If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring . For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location. Note Red Hat does not test third party registries with OpenShift Container Platform. 13.2. Prerequisites The oc client is installed. A running cluster. 
An installed mirror registry, which is a container image registry that supports Docker v2-2 in the location that will host the OpenShift Container Platform cluster, such as one of the following registries: Red Hat Quay JFrog Artifactory Sonatype Nexus Repository Harbor If you have an subscription to Red Hat Quay, see the documentation on deploying Red Hat Quay for proof-of-concept purposes or by using the Quay Operator . The mirror repository must be configured to share images. For example, a Red Hat Quay repository requires Organizations in order to share images. Access to the internet to obtain the necessary container images. 13.3. Preparing the cluster for mirroring Before disconnecting your cluster, you must mirror, or copy, the images to a mirror registry that is reachable by every node in your disconnected cluster. In order to mirror the images, you must prepare your cluster by: Adding the mirror registry certificates to the list of trusted CAs on your host. Creating a .dockerconfigjson file that contains your image pull secret, which is from the cloud.openshift.com token. Procedure Configuring credentials that allow image mirroring: Add the CA certificate for the mirror registry, in the simple PEM or DER file formats, to the list of trusted CAs. For example: USD cp </path/to/cert.crt> /usr/share/pki/ca-trust-source/anchors/ where, </path/to/cert.crt> Specifies the path to the certificate on your local file system. Update the CA trust. For example, in Linux: USD update-ca-trust Extract the .dockerconfigjson file from the global pull secret: USD oc extract secret/pull-secret -n openshift-config --confirm --to=. Example output .dockerconfigjson Edit the .dockerconfigjson file to add your mirror registry and authentication credentials and save it as a new file: {"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}},"<registry>:<port>/<namespace>/":{"auth":"<token>"}}} where: <local_registry> Specifies the registry domain name, and optionally the port, that your mirror registry uses to serve content. auth Specifies the base64-encoded user name and password for your mirror registry. <registry>:<port>/<namespace> Specifies the mirror registry details. <token> Specifies the base64-encoded username:password for your mirror registry. For example: USD {"auths":{"cloud.openshift.com":{"auth":"b3BlbnNoaWZ0Y3UjhGOVZPT0lOMEFaUjdPUzRGTA==","email":"[email protected]"}, "quay.io":{"auth":"b3BlbnNoaWZ0LXJlbGVhc2UtZGOVZPT0lOMEFaUGSTd4VGVGVUjdPUzRGTA==","email":"[email protected]"}, "registry.connect.redhat.com"{"auth":"NTE3MTMwNDB8dWhjLTFEZlN3VHkxOSTd4VGVGVU1MdTpleUpoYkdjaUailA==","email":"[email protected]"}, "registry.redhat.io":{"auth":"NTE3MTMwNDB8dWhjLTFEZlN3VH3BGSTd4VGVGVU1MdTpleUpoYkdjaU9fZw==","email":"[email protected]"}, "registry.svc.ci.openshift.org":{"auth":"dXNlcjpyWjAwWVFjSEJiT2RKVW1pSmg4dW92dGp1SXRxQ3RGN1pwajJhN1ZXeTRV"},"my-registry:5000/my-namespace/":{"auth":"dXNlcm5hbWU6cGFzc3dvcmQ="}}} 13.4. Mirroring the images After the cluster is properly configured, you can mirror the images from your external repositories to the mirror repository. Procedure Mirror the Operator Lifecycle Manager (OLM) images: USD oc adm catalog mirror registry.redhat.io/redhat/redhat-operator-index:v{product-version} <mirror_registry>:<port>/olm -a <reg_creds> where: product-version Specifies the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.8 . 
mirror_registry Specifies the fully qualified domain name (FQDN) for the target registry and namespace to mirror the Operator content to, where <namespace> is any existing namespace on the registry. reg_creds Specifies the location of your modified .dockerconfigjson file. For example: USD oc adm catalog mirror registry.redhat.io/redhat/redhat-operator-index:v4.8 mirror.registry.com:443/olm -a ./.dockerconfigjson --index-filter-by-os='.*' Mirror the content for any other Red Hat-provided Operator: USD oc adm catalog mirror <index_image> <mirror_registry>:<port>/<namespace> -a <reg_creds> where: index_image Specifies the index image for the catalog that you want to mirror. mirror_registry Specifies the FQDN for the target registry and namespace to mirror the Operator content to, where <namespace> is any existing namespace on the registry. reg_creds Optional: Specifies the location of your registry credentials file, if required. For example: USD oc adm catalog mirror registry.redhat.io/redhat/community-operator-index:v4.8 mirror.registry.com:443/olm -a ./.dockerconfigjson --index-filter-by-os='.*' Mirror the OpenShift Container Platform image repository: USD oc adm release mirror -a .dockerconfigjson --from=quay.io/openshift-release-dev/ocp-release:v<product-version>-<architecture> --to=<local_registry>/<local_repository> --to-release-image=<local_registry>/<local_repository>:v<product-version>-<architecture> where: product-version Specifies the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.8.15-x86_64 . architecture Specifies the type of architecture for your server, such as x86_64 . local_registry Specifies the registry domain name for your mirror repository. local_repository Specifies the name of the repository to create in your registry, such as ocp4/openshift4 . For example: USD oc adm release mirror -a .dockerconfigjson --from=quay.io/openshift-release-dev/ocp-release:4.8.15-x86_64 --to=mirror.registry.com:443/ocp/release --to-release-image=mirror.registry.com:443/ocp/release:4.8.15-x86_64 Example output info: Mirroring 109 images to mirror.registry.com/ocp/release ... mirror.registry.com:443/ ocp/release manifests: sha256:086224cadce475029065a0efc5244923f43fb9bb3bb47637e0aaf1f32b9cad47 -> 4.8.15-x86_64-thanos sha256:0a214f12737cb1cfbec473cc301aa2c289d4837224c9603e99d1e90fc00328db -> 4.8.15-x86_64-kuryr-controller sha256:0cf5fd36ac4b95f9de506623b902118a90ff17a07b663aad5d57c425ca44038c -> 4.8.15-x86_64-pod sha256:0d1c356c26d6e5945a488ab2b050b75a8b838fc948a75c0fa13a9084974680cb -> 4.8.15-x86_64-kube-client-agent ..... sha256:66e37d2532607e6c91eedf23b9600b4db904ce68e92b43c43d5b417ca6c8e63c mirror.registry.com:443/ocp/release:4.5.41-multus-admission-controller sha256:d36efdbf8d5b2cbc4dcdbd64297107d88a31ef6b0ec4a39695915c10db4973f1 mirror.registry.com:443/ocp/release:4.5.41-cluster-kube-scheduler-operator sha256:bd1baa5c8239b23ecdf76819ddb63cd1cd6091119fecdbf1a0db1fb3760321a2 mirror.registry.com:443/ocp/release:4.5.41-aws-machine-controllers info: Mirroring completed in 2.02s (0B/s) Success Update image: mirror.registry.com:443/ocp/release:4.5.41-x86_64 Mirror prefix: mirror.registry.com:443/ocp/release Mirror any other registries, as needed: USD oc image mirror <online_registry>/my/image:latest <mirror_registry> Additional information For more information about mirroring Operator catalogs, see Mirroring an Operator catalog . For more information about the oc adm catalog mirror command, see the OpenShift CLI administrator command reference . 13.5. 
Configuring the cluster for the mirror registry After creating and mirroring the images to the mirror registry, you must modify your cluster so that pods can pull images from the mirror registry. You must: Add the mirror registry credentials to the global pull secret. Add the mirror registry server certificate to the cluster. Create an ImageContentSourcePolicy custom resource (ICSP), which associates the mirror registry with the source registry. Add the mirror registry credentials to the cluster global pull secret: USD oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1 1 Provide the path to the new pull secret file. For example: USD oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=.mirrorsecretconfigjson Add the CA-signed mirror registry server certificate to the nodes in the cluster: Create a config map that includes the server certificate for the mirror registry: USD oc create configmap <config_map_name> --from-file=<mirror_address_host>..<port>=USDpath/ca.crt -n openshift-config For example: USD oc create configmap registry-config --from-file=mirror.registry.com..443=/root/certs/ca-chain.cert.pem -n openshift-config Use the config map to update the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster: USD oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"<config_map_name>"}}}' --type=merge For example: USD oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge Create an ICSP to redirect container pull requests from the online registries to the mirror registry: Create the ImageContentSourcePolicy custom resource: apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 Specifies the name of the mirror image registry and repository. 2 Specifies the online registry and repository containing the content that is mirrored. Create the ICSP object: USD oc create -f registryrepomirror.yaml Example output imagecontentsourcepolicy.operator.openshift.io/mirror-ocp created OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. Verify that the credentials, CA, and ICSP for the mirror registry were added: Log into a node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host Check the config.json file for the credentials: sh-4.4# cat /var/lib/kubelet/config.json Example output {"auths":{"brew.registry.redhat.io":{"xx=="},"brewregistry.stage.redhat.io":{"auth":"xxx=="},"mirror.registry.com:443":{"auth":"xx="}}} 1 1 Ensure that the mirror registry and credentials are present. Change to the certs.d directory: sh-4.4# cd /etc/docker/certs.d/ List the certificates in the certs.d directory: sh-4.4# ls Example output image-registry.openshift-image-registry.svc.cluster.local:5000 image-registry.openshift-image-registry.svc:5000 mirror.registry.com:443 1 1 Ensure that the mirror registry is in the list. 
Check that the ICSP added the mirror registry to the registries.conf file: sh-4.4# cat /etc/containers/registries.conf Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "quay.io/openshift-release-dev/ocp-release" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.registry.com:443/ocp/release" [[registry]] prefix = "" location = "quay.io/openshift-release-dev/ocp-v4.0-art-dev" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.registry.com:443/ocp/release" The registry.mirror parameters indicate that the mirror registry is searched before the original registry. Exit the node. sh-4.4# exit 13.6. Ensure applications continue to work Before disconnecting the cluster from the network, ensure that your cluster is working as expected and all of your applications are working as expected. Procedure Use the following commands to check the status of your cluster: Ensure your pods are running: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-0 1/1 Running 0 39m kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-1 1/1 Running 0 39m kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-2 1/1 Running 0 39m openshift-apiserver-operator openshift-apiserver-operator-79c7c646fd-5rvr5 1/1 Running 3 45m openshift-apiserver apiserver-b944c4645-q694g 2/2 Running 0 29m openshift-apiserver apiserver-b944c4645-shdxb 2/2 Running 0 31m openshift-apiserver apiserver-b944c4645-x7rf2 2/2 Running 0 33m ... Ensure your nodes are in the READY status: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ci-ln-47ltxtb-f76d1-mrffg-master-0 Ready master 42m v1.29.4 ci-ln-47ltxtb-f76d1-mrffg-master-1 Ready master 42m v1.29.4 ci-ln-47ltxtb-f76d1-mrffg-master-2 Ready master 42m v1.29.4 ci-ln-47ltxtb-f76d1-mrffg-worker-a-gsxbz Ready worker 35m v1.29.4 ci-ln-47ltxtb-f76d1-mrffg-worker-b-5qqdx Ready worker 35m v1.29.4 ci-ln-47ltxtb-f76d1-mrffg-worker-c-rjkpq Ready worker 34m v1.29.4 13.7. Disconnect the cluster from the network After mirroring all the required repositories and configuring your cluster to work as a disconnected cluster, you can disconnect the cluster from the network. Note The Insights Operator is degraded when the cluster loses its Internet connection. You can avoid this problem by temporarily disabling the Insights Operator until you can restore it. 13.8. Restoring a degraded Insights Operator Disconnecting the cluster from the network necessarily causes the cluster to lose the Internet connection. The Insights Operator becomes degraded because it requires access to Red Hat Insights . This topic describes how to recover from a degraded Insights Operator. Procedure Edit your .dockerconfigjson file to remove the cloud.openshift.com entry, for example: "cloud.openshift.com":{"auth":"<hash>","email":"[email protected]"} Save the file. Update the cluster secret with the edited .dockerconfigjson file: USD oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=./.dockerconfigjson Verify that the Insights Operator is no longer degraded: USD oc get co insights Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE insights 4.5.41 True False False 3d 13.9. Restoring the network If you want to reconnect a disconnected cluster and pull images from online registries, delete the cluster's ImageContentSourcePolicy (ICSP) objects. 
Without the ICSP, pull requests to external registries are no longer redirected to the mirror registry. Procedure View the ICSP objects in your cluster: USD oc get imagecontentsourcepolicy Example output NAME AGE mirror-ocp 6d20h ocp4-index-0 6d18h qe45-index-0 6d15h Delete all the ICSP objects you created when disconnecting your cluster: USD oc delete imagecontentsourcepolicy <icsp_name> <icsp_name> <icsp_name> For example: USD oc delete imagecontentsourcepolicy mirror-ocp ocp4-index-0 qe45-index-0 Example output imagecontentsourcepolicy.operator.openshift.io "mirror-ocp" deleted imagecontentsourcepolicy.operator.openshift.io "ocp4-index-0" deleted imagecontentsourcepolicy.operator.openshift.io "qe45-index-0" deleted Wait for all the nodes to restart and return to the READY status and verify that the registries.conf file is pointing to the original registries and not the mirror registries: Log into a node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host Examine the registries.conf file: sh-4.4# cat /etc/containers/registries.conf Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] 1 1 The registry and registry.mirror entries created by the ICSPs you deleted are removed.
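To speed up the per-node check on larger clusters, you can loop over all nodes with oc debug and inspect registries.conf from a single terminal. The following is a minimal, unofficial sketch; the mirror.registry.com:443 host name matches the earlier examples and must be replaced with your own mirror registry host.

#!/bin/bash
# Sketch: confirm that no mirror entries remain in /etc/containers/registries.conf
# on any node after the ImageContentSourcePolicy objects are deleted.
MIRROR_HOST="mirror.registry.com:443"   # assumed mirror registry host from the examples above

for node in $(oc get nodes -o name); do
    echo "== ${node} =="
    # oc debug starts a temporary debug pod on the node; chroot /host reads the node's real filesystem.
    conf=$(oc debug "${node}" -- chroot /host cat /etc/containers/registries.conf 2>/dev/null)
    if echo "${conf}" | grep -q "${MIRROR_HOST}"; then
        echo "mirror entries still present on this node"
    else
        echo "no mirror entries found"
    fi
done

If any node still lists the mirror, wait for all nodes to restart and return to the READY status as described above, and then run the check again.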
[ "cp </path/to/cert.crt> /usr/share/pki/ca-trust-source/anchors/", "update-ca-trust", "oc extract secret/pull-secret -n openshift-config --confirm --to=.", ".dockerconfigjson", "{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}},\"<registry>:<port>/<namespace>/\":{\"auth\":\"<token>\"}}}", "{\"auths\":{\"cloud.openshift.com\":{\"auth\":\"b3BlbnNoaWZ0Y3UjhGOVZPT0lOMEFaUjdPUzRGTA==\",\"email\":\"[email protected]\"}, \"quay.io\":{\"auth\":\"b3BlbnNoaWZ0LXJlbGVhc2UtZGOVZPT0lOMEFaUGSTd4VGVGVUjdPUzRGTA==\",\"email\":\"[email protected]\"}, \"registry.connect.redhat.com\"{\"auth\":\"NTE3MTMwNDB8dWhjLTFEZlN3VHkxOSTd4VGVGVU1MdTpleUpoYkdjaUailA==\",\"email\":\"[email protected]\"}, \"registry.redhat.io\":{\"auth\":\"NTE3MTMwNDB8dWhjLTFEZlN3VH3BGSTd4VGVGVU1MdTpleUpoYkdjaU9fZw==\",\"email\":\"[email protected]\"}, \"registry.svc.ci.openshift.org\":{\"auth\":\"dXNlcjpyWjAwWVFjSEJiT2RKVW1pSmg4dW92dGp1SXRxQ3RGN1pwajJhN1ZXeTRV\"},\"my-registry:5000/my-namespace/\":{\"auth\":\"dXNlcm5hbWU6cGFzc3dvcmQ=\"}}}", "oc adm catalog mirror registry.redhat.io/redhat/redhat-operator-index:v{product-version} <mirror_registry>:<port>/olm -a <reg_creds>", "oc adm catalog mirror registry.redhat.io/redhat/redhat-operator-index:v4.8 mirror.registry.com:443/olm -a ./.dockerconfigjson --index-filter-by-os='.*'", "oc adm catalog mirror <index_image> <mirror_registry>:<port>/<namespace> -a <reg_creds>", "oc adm catalog mirror registry.redhat.io/redhat/community-operator-index:v4.8 mirror.registry.com:443/olm -a ./.dockerconfigjson --index-filter-by-os='.*'", "oc adm release mirror -a .dockerconfigjson --from=quay.io/openshift-release-dev/ocp-release:v<product-version>-<architecture> --to=<local_registry>/<local_repository> --to-release-image=<local_registry>/<local_repository>:v<product-version>-<architecture>", "oc adm release mirror -a .dockerconfigjson --from=quay.io/openshift-release-dev/ocp-release:4.8.15-x86_64 --to=mirror.registry.com:443/ocp/release --to-release-image=mirror.registry.com:443/ocp/release:4.8.15-x86_64", "info: Mirroring 109 images to mirror.registry.com/ocp/release mirror.registry.com:443/ ocp/release manifests: sha256:086224cadce475029065a0efc5244923f43fb9bb3bb47637e0aaf1f32b9cad47 -> 4.8.15-x86_64-thanos sha256:0a214f12737cb1cfbec473cc301aa2c289d4837224c9603e99d1e90fc00328db -> 4.8.15-x86_64-kuryr-controller sha256:0cf5fd36ac4b95f9de506623b902118a90ff17a07b663aad5d57c425ca44038c -> 4.8.15-x86_64-pod sha256:0d1c356c26d6e5945a488ab2b050b75a8b838fc948a75c0fa13a9084974680cb -> 4.8.15-x86_64-kube-client-agent ..... 
sha256:66e37d2532607e6c91eedf23b9600b4db904ce68e92b43c43d5b417ca6c8e63c mirror.registry.com:443/ocp/release:4.5.41-multus-admission-controller sha256:d36efdbf8d5b2cbc4dcdbd64297107d88a31ef6b0ec4a39695915c10db4973f1 mirror.registry.com:443/ocp/release:4.5.41-cluster-kube-scheduler-operator sha256:bd1baa5c8239b23ecdf76819ddb63cd1cd6091119fecdbf1a0db1fb3760321a2 mirror.registry.com:443/ocp/release:4.5.41-aws-machine-controllers info: Mirroring completed in 2.02s (0B/s) Success Update image: mirror.registry.com:443/ocp/release:4.5.41-x86_64 Mirror prefix: mirror.registry.com:443/ocp/release", "oc image mirror <online_registry>/my/image:latest <mirror_registry>", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=.mirrorsecretconfigjson", "oc create configmap <config_map_name> --from-file=<mirror_address_host>..<port>=USDpath/ca.crt -n openshift-config", "S oc create configmap registry-config --from-file=mirror.registry.com..443=/root/certs/ca-chain.cert.pem -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"<config_map_name>\"}}}' --type=merge", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "oc create -f registryrepomirror.yaml", "imagecontentsourcepolicy.operator.openshift.io/mirror-ocp created", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# cat /var/lib/kubelet/config.json", "{\"auths\":{\"brew.registry.redhat.io\":{\"xx==\"},\"brewregistry.stage.redhat.io\":{\"auth\":\"xxx==\"},\"mirror.registry.com:443\":{\"auth\":\"xx=\"}}} 1", "sh-4.4# cd /etc/docker/certs.d/", "sh-4.4# ls", "image-registry.openshift-image-registry.svc.cluster.local:5000 image-registry.openshift-image-registry.svc:5000 mirror.registry.com:443 1", "sh-4.4# cat /etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"quay.io/openshift-release-dev/ocp-release\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.registry.com:443/ocp/release\" [[registry]] prefix = \"\" location = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.registry.com:443/ocp/release\"", "sh-4.4# exit", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-0 1/1 Running 0 39m kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-1 1/1 Running 0 39m kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-2 1/1 Running 0 39m openshift-apiserver-operator openshift-apiserver-operator-79c7c646fd-5rvr5 1/1 Running 3 45m openshift-apiserver apiserver-b944c4645-q694g 2/2 Running 0 29m openshift-apiserver apiserver-b944c4645-shdxb 2/2 Running 0 31m openshift-apiserver apiserver-b944c4645-x7rf2 2/2 Running 0 33m", "oc get nodes", "NAME STATUS ROLES AGE VERSION ci-ln-47ltxtb-f76d1-mrffg-master-0 Ready master 42m v1.29.4 
ci-ln-47ltxtb-f76d1-mrffg-master-1 Ready master 42m v1.29.4 ci-ln-47ltxtb-f76d1-mrffg-master-2 Ready master 42m v1.29.4 ci-ln-47ltxtb-f76d1-mrffg-worker-a-gsxbz Ready worker 35m v1.29.4 ci-ln-47ltxtb-f76d1-mrffg-worker-b-5qqdx Ready worker 35m v1.29.4 ci-ln-47ltxtb-f76d1-mrffg-worker-c-rjkpq Ready worker 34m v1.29.4", "\"cloud.openshift.com\":{\"auth\":\"<hash>\",\"email\":\"[email protected]\"}", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=./.dockerconfigjson", "oc get co insights", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE insights 4.5.41 True False False 3d", "oc get imagecontentsourcepolicy", "NAME AGE mirror-ocp 6d20h ocp4-index-0 6d18h qe45-index-0 6d15h", "oc delete imagecontentsourcepolicy <icsp_name> <icsp_name> <icsp_name>", "oc delete imagecontentsourcepolicy mirror-ocp ocp4-index-0 qe45-index-0", "imagecontentsourcepolicy.operator.openshift.io \"mirror-ocp\" deleted imagecontentsourcepolicy.operator.openshift.io \"ocp4-index-0\" deleted imagecontentsourcepolicy.operator.openshift.io \"qe45-index-0\" deleted", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# cat /etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/postinstallation_configuration/connected-to-disconnected
Chapter 1. Introduction
Chapter 1. Introduction This guide provides information about constructing a spine-leaf network topology for your Red Hat OpenStack Platform environment. This includes a full end-to-end scenario and example files to help replicate a more extensive network topology within your own environment. 1.1. Spine-leaf networking Red Hat OpenStack Platform has a composable network architecture that you can use to adapt your networking to the routed spine-leaf data center topology. In a practical application of routed spine-leaf, a leaf is represented as a composable Compute or Storage role usually in a data center rack, as shown in Figure 1.1, "Routed spine-leaf example" . The Leaf 0 rack has an undercloud node, Controller nodes, and Compute nodes. The composable networks are presented to the nodes, which have been assigned to composable roles. The following diagram contains the following configuration: The StorageLeaf networks are presented to the Ceph storage and Compute nodes. The NetworkLeaf represents an example of any network you might want to compose. Figure 1.1. Routed spine-leaf example 1.2. Spine-leaf network topology The spine-leaf scenario takes advantage of OpenStack Networking (neutron) functionality to define multiple subnets within segments of a single network. Each network uses a base network which acts as Leaf 0. Director creates Leaf 1 and Leaf 2 subnets as segments of the main network. This scenario uses the following networks:
Table 1.1. Leaf 0 Networks (base networks)
Network | Roles attached | Subnet
Provisioning / Ctlplane / Leaf0 | Controller, ComputeLeaf0, CephStorageLeaf0 | 192.168.10.0/24
Storage | Controller, ComputeLeaf0, CephStorageLeaf0 | 172.16.0.0/24
StorageMgmt | Controller, CephStorageLeaf0 | 172.17.0.0/24
InternalApi | Controller, ComputeLeaf0 | 172.18.0.0/24
Tenant [1] | Controller, ComputeLeaf0 | 172.19.0.0/24
External | Controller | 10.1.1.0/24
[1] Tenant networks are also known as project networks.
Table 1.2. Leaf 1 Networks
Network | Roles attached | Subnet
Provisioning / Ctlplane / Leaf1 | ComputeLeaf1, CephStorageLeaf1 | 192.168.11.0/24
StorageLeaf1 | ComputeLeaf1, CephStorageLeaf1 | 172.16.1.0/24
StorageMgmtLeaf1 | CephStorageLeaf1 | 172.17.1.0/24
InternalApiLeaf1 | ComputeLeaf1 | 172.18.1.0/24
TenantLeaf1 [1] | ComputeLeaf1 | 172.19.1.0/24
[1] Tenant networks are also known as project networks.
Table 1.3. Leaf 2 Networks
Network | Roles attached | Subnet
Provisioning / Ctlplane / Leaf2 | ComputeLeaf2, CephStorageLeaf2 | 192.168.12.0/24
StorageLeaf2 | ComputeLeaf2, CephStorageLeaf2 | 172.16.2.0/24
StorageMgmtLeaf2 | CephStorageLeaf2 | 172.17.2.0/24
InternalApiLeaf2 | ComputeLeaf2 | 172.18.2.0/24
TenantLeaf2 [1] | ComputeLeaf2 | 172.19.2.0/24
[1] Tenant networks are also known as project networks.
Figure 1.2. Spine-leaf network topology 1.3. Spine-leaf requirements To deploy the overcloud on a network with a L3 routed architecture, complete the following prerequisite steps: Layer-3 routing Configure the routing of the network infrastructure to enable traffic between the different L2 segments. You can configure this routing statically or dynamically. DHCP-Relay Each L2 segment not local to the undercloud must provide dhcp-relay . You must forward DHCP requests to the undercloud on the provisioning network segment where the undercloud is connected. Note The undercloud uses two DHCP servers. One for baremetal node introspection, and another for deploying overcloud nodes. Ensure that you read DHCP relay configuration to understand the requirements when you configure dhcp-relay . 1.4. 
Spine-leaf limitations Some roles, such as the Controller role, use virtual IP addresses and clustering. The mechanism behind this functionality requires L2 network connectivity between these nodes. You must place these nodes within the same leaf. Similar restrictions apply to Networker nodes. The network service implements highly-available default paths in the network with Virtual Router Redundancy Protocol (VRRP). Because VRRP uses a virtual router IP address, you must connect master and backup nodes to the same L2 network segment. When you use tenant or provider networks with VLAN segmentation, you must share the particular VLANs between all Networker and Compute nodes. Note It is possible to configure the network service with multiple sets of Networker nodes. Each set of Networker nodes share routes for their networks, and VRRP provides highly-available default paths within each set of Networker nodes. In this type of configuration, all Networker nodes that share networks must be on the same L2 network segment.
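As a concrete illustration of the DHCP-relay requirement from Section 1.3, the following minimal sketch starts an ISC DHCP relay on a hypothetical router interface that faces the Leaf 1 provisioning segment (192.168.11.0/24 in Table 1.2) and forwards requests to an assumed undercloud address on the Leaf 0 provisioning network (192.168.10.0/24 in Table 1.1). The interface name and the undercloud IP address are placeholders, and in many deployments the relay is provided by the switch or router operating system instead:

# Run on the router or host that connects the Leaf 1 provisioning segment.
# eth1 faces the 192.168.11.0/24 segment; 192.168.10.1 is an assumed
# undercloud ctlplane address on the 192.168.10.0/24 Leaf 0 network.
dhcrelay -d -4 -i eth1 192.168.10.1

See the DHCP relay configuration guidance referenced in Section 1.3 for the exact addresses to relay to for introspection and for overcloud deployment.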
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/spine_leaf_networking/assembly_introduction-to-spine-leaf-networking
Chapter 1. Collaborating
Chapter 1. Collaborating Effective revision control is essential to all multi-developer projects. It allows all developers in a team to create, review, revise, and document code in a systematic and orderly manner. Red Hat Enterprise Linux 6 supports three of the most popular open-source revision control systems: Git , SVN , and CVS . The following sections provide a brief overview and references to relevant documentation for each tool. 1.1. Git Git is a distributed revision control system with a peer-to-peer architecture. As opposed to centralized version control systems with a client-server model, Git ensures that each working copy of a Git repository is a complete copy of that repository, including its full revision history. This not only allows you to work on and contribute to projects without the need to have permission to push your changes to their official repositories, but also makes it possible for you to work with no network connection. 1.1.1. Installing and Configuring Git Installing the git Package In Red Hat Enterprise Linux 6, Git is provided by the git package. To install the git package and all its dependencies on your system, type the following at a shell prompt as root : yum install git Configuring the Default Text Editor Certain Git commands, such as git commit , require the user to write a short message or make some changes in an external text editor. To determine which text editor to start, Git attempts to read the value of the GIT_EDITOR environment variable, the core.editor configuration option, the VISUAL environment variable, and finally the EDITOR environment variable, in this particular order. If none of these options and variables are specified, the git command starts vi as a reasonable default option. To change the value of the core.editor configuration option in order to specify a different text editor, type the following at a shell prompt: git config --global core.editor command Replace command with the command to be used to start the selected text editor. Example 1.1. Configuring the Default Text Editor To configure Git to use vim as the default text editor, type the following at a shell prompt: git config --global core.editor vim Setting Up User Information In Git , each commit (or revision) is associated with the full name and email of the person responsible for it. By default, Git uses an identity based on the user name and the host name. To change the full name associated with your Git commits, type the following at a shell prompt: git config --global user.name " full name " To change the email address associated with your Git commits, type: git config --global user.email " email_address " Example 1.2. Setting Up User Information To configure Git to use John Doe as your full name and [email protected] as your email address, type the following at a shell prompt: git config --global user.name "John Doe" git config --global user.email "[email protected]" 1.1.2. Creating a New Repository A repository is a place where Git stores all files that are under revision control, as well as additional data related to these files, such as the complete history of changes or information about who made those changes and when. Unlike in centralized revision control systems like Subversion or CVS, a Git repository and a working directory are usually the same. A typical Git repository also stores only a single project, and when it is publicly accessible, anyone can clone it along with its complete revision history. 
Initializing an Empty Repository To create a new, empty Git repository, change to the directory in which you want to keep the repository and type the following at a shell prompt: git init This creates a hidden directory named .git in which all repository information is stored. Importing Data to a Repository To put an existing project under revision control, create a Git repository in the directory with the project and run the following command: git add . This marks all files and directories in the current working directory as ready to be added to the Git repository. To proceed and actually add this content to the repository, commit the changes by typing the following at a shell prompt: git commit [ -m " commit message " ] Replace commit message with a short description of your revision. If you omit the -m option, this command allows you to write the commit message in an external text editor. For information on how to configure the default text editor, see the section called "Configuring the Default Text Editor" . 1.1.3. Cloning an Existing Repository To clone an existing Git repository, type the following at a shell prompt: git clone git_repository [ directory ] Replace git_repository with a URL or a path to the Git repository you want to clone, and directory with a path to the directory in which you want to store the clone. 1.1.4. Adding, Renaming, and Deleting Files Adding Files and Directories To add an existing file to a Git repository and put it under revision control, change to the directory with your local Git repository and type the following at a shell prompt: git add file ... Replace file with the file or files you want to add. This command marks the selected file or files as ready to be added to the Git repository. Similarly, to add all files that are stored in a certain directory to a Git repository, type: git add directory ... Replace directory with the directory or directories you want to add. This command marks all files in the selected directory or directories as ready to be added to the Git repository. To proceed and actually add this content to the repository, commit the changes as described in Section 1.1.6, "Committing Changes" . Renaming Files and Directories To rename an existing file or directory in a Git repository, change to the directory with your local Git repository and type the following at a shell prompt: git mv old_name new_name Replace old_name with the current name of the file or directory and new_name with the new name. This command renames the selected file or directory and marks it as ready to be renamed in the Git repository. To proceed and actually rename the content in the repository, commit the changes as described in Section 1.1.6, "Committing Changes" . Deleting Files and Directories To delete an existing file from a Git repository, change to the directory with your local Git repository and type the following at a shell prompt: git rm file ... Replace file with the file or files you want to delete. This command deletes all selected files and marks them as ready to be deleted from the Git repository. Similarly, to delete all files that are stored in a certain directory from a Git repository, type: git rm -r directory ... Replace directory with the directory or directories you want to delete. This command deletes all selected directories and marks them as ready to be deleted from the Git repository. To proceed and actually delete this content from the repository, commit the changes as described in Section 1.1.6, "Committing Changes" . 1.1.5. 
Viewing Changes Viewing the Current Status To determine the current status of your local Git repository, change to the directory with the repository and type the following command at a shell prompt: git status This command displays information about all uncommitted changes in the repository ( new file , renamed , deleted , or modified ) and tells you which changes will be applied the next time you commit them. For information on how to commit your changes, see Section 1.1.6, "Committing Changes" . Viewing Differences To view all changes in a Git repository, change to the directory with the repository and type the following at a shell prompt: git diff This command displays changes between the files in the repository and their latest revision. If you are only interested in changes in a particular file, supply its name on the command line as follows: git diff file ... Replace file with the file or files you want to view. 1.1.6. Committing Changes To apply your changes to a Git repository and create a new revision, change to the directory with the repository and type the following command at a shell prompt: git commit [ -m " commit message " ] Replace commit message with a short description of your revision. This command commits all changes in files that are explicitly marked as ready to be committed. To commit changes in all files that are under revision control, add the -a command line option as follows: git commit -a [ -m " commit message " ] Note that if you omit the -m option, the command allows you to write the commit message in an external text editor. For information on how to configure the default text editor, see the section called "Configuring the Default Text Editor" . 1.1.7. Sharing Changes Unlike in centralized version control systems such as CVS or Subversion, when working with Git , project contributors usually do not make their changes in a single, central repository. Instead, they either create a publicly accessible clone of their local repository, or submit their changes to others over email as patches. Pushing Changes to a Public Repository To push your changes to a publicly accessible Git repository, change to the directory with your local repository and type the following at a shell prompt: git push remote_repository Replace remote_repository with the name of the remote repository you want to push your changes to. Note that the repository from which you originally cloned your local copy is automatically named origin . Creating Patches from Individual Commits To create patches from your commits, change to the directory with your local Git repository and type the following at a shell prompt: git format-patch remote_repository Replace remote_repository with the name of the remote repository from which you made your local copy. This creates a patch for each commit that is not present in this remote repository. 1.1.8. Updating a Repository To update your local copy of a Git repository and get the latest changes from a remote repository, change to the directory with your local Git repository and type the following at a shell prompt: git fetch remote_repository Replace remote_repository with the name of the remote repository. This command fetches information about the current status of the remote repository, allowing you to review these changes before applying them to your local copy. 
To proceed and merge these changes with what you have in your local Git repository, type: git merge remote_repository Alternatively, you can perform both these steps at the same time by using the following command instead: git pull remote_repository 1.1.9. Additional Resources A detailed description of Git and its features is beyond the scope of this book. For more information about this revision control system, see the resources listed below. Installed Documentation gittutorial (7) - The manual page named gittutorial provides a brief introduction to Git and its usage. gittutorial-2 (7) - The manual page named gittutorial-2 provides the second part of a brief introduction to Git and its usage. Git User's Manual - HTML documentation for Git is located at /usr/share/doc/git-1.7.1/user-manual.html . Online Documentation Pro Git - The online version of the Pro Git book provides a detailed description of Git , its concepts and its usage.
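The commands from Sections 1.1.1 to 1.1.8 can be combined into a single working session. The following sketch is illustrative only; the ~/myproject and ~/myproject-clone paths, the notes.txt file, and the commit messages are placeholder examples:

# One-time configuration (Section 1.1.1)
git config --global user.name "John Doe"
git config --global user.email "[email protected]"

# Put an existing project under revision control (Section 1.1.2)
cd ~/myproject
git init
git add .
git commit -m "Initial import"

# Clone the repository and add a new file in the clone (Sections 1.1.3 and 1.1.4)
git clone ~/myproject ~/myproject-clone
cd ~/myproject-clone
echo "draft" > notes.txt
git add notes.txt
git status                          # the file is listed as ready to be committed (Section 1.1.5)
git commit -m "Add notes file"      # create the new revision (Section 1.1.6)

# Review and commit a later change to a tracked file
echo "more notes" >> notes.txt
git diff                            # show uncommitted changes to tracked files (Section 1.1.5)
git commit -a -m "Update notes"     # commit all changes to tracked files (Section 1.1.6)

# Fetch and merge any new changes from the original repository (Section 1.1.8)
git pull origin master

To publish the changes made in the clone, push them to a publicly accessible repository or send them as patches created with git format-patch , as described in Section 1.1.7.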
[ "~]# yum install git", "~]USD git config --global core.editor vim", "~]USD git config --global user.name \"John Doe\" ~]USD git config --global user.email \"[email protected]\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/collaborating
Operators
Operators OpenShift Container Platform 4.16 Working with Operators in OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "etcd ├── manifests │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml │ └── secret.yaml │ └── configmap.yaml └── metadata └── annotations.yaml └── dependencies.yaml", "annotations: operators.operatorframework.io.bundle.mediatype.v1: \"registry+v1\" 1 operators.operatorframework.io.bundle.manifests.v1: \"manifests/\" 2 operators.operatorframework.io.bundle.metadata.v1: \"metadata/\" 3 operators.operatorframework.io.bundle.package.v1: \"test-operator\" 4 operators.operatorframework.io.bundle.channels.v1: \"beta,stable\" 5 operators.operatorframework.io.bundle.channel.default.v1: \"stable\" 6", "dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2", "Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml", "catalog ├── packageA │ └── index.yaml ├── packageB │ ├── .indexignore │ ├── index.yaml │ └── objects │ └── packageB.v0.1.0.clusterserviceversion.yaml └── packageC └── index.json └── deprecations.yaml", "_Meta: { // schema is required and must be a non-empty string schema: string & !=\"\" // package is optional, but if it's defined, it must be a non-empty string package?: string & !=\"\" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null }", "#Package: { schema: \"olm.package\" // Package name name: string & !=\"\" // A description of the package description?: string // The package's default channel defaultChannel: string & !=\"\" // An optional icon icon?: { base64data: string mediatype: string } }", "#Channel: { schema: \"olm.channel\" package: string & !=\"\" name: string & !=\"\" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !=\"\" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !=\"\" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=\"\"] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. skipRange?: string & !=\"\" }", "#Bundle: { schema: \"olm.bundle\" package: string & !=\"\" name: string & !=\"\" image: string & !=\"\" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !=\"\" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !=\"\" }", "schema: olm.deprecations package: my-operator 1 entries: - reference: schema: olm.package 2 message: | 3 The 'my-operator' package is end of life. Please use the 'my-operator-new' package for support. - reference: schema: olm.channel name: alpha 4 message: | The 'alpha' channel is no longer supported. Please switch to the 'stable' channel. - reference: schema: olm.bundle name: my-operator.v1.68.0 5 message: | my-operator.v1.68.0 is deprecated. 
Uninstall my-operator.v1.68.0 and install my-operator.v1.72.0 for support.", "my-catalog └── my-operator ├── index.yaml └── deprecations.yaml", "#PropertyPackage: { type: \"olm.package\" value: { packageName: string & !=\"\" version: string & !=\"\" } }", "#PropertyGVK: { type: \"olm.gvk\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }", "#PropertyPackageRequired: { type: \"olm.package.required\" value: { packageName: string & !=\"\" versionRange: string & !=\"\" } }", "#PropertyGVKRequired: { type: \"olm.gvk.required\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }", "name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317", "name=USD(yq eval '.name' catalog.yaml) mkdir \"USDname\" yq eval '.name + \"/\" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + \"|\" + USDcatalog + \"/\" + .name + \"/index.yaml\"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render \"USDimage\" > \"USDfile\" done opm generate dockerfile \"USDname\" indexImage=USD(yq eval '.repo + \":\" + .tag' catalog.yaml) docker build -t \"USDindexImage\" -f \"USDname.Dockerfile\" . docker push \"USDindexImage\"", "\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog 1 namespace: openshift-marketplace 2 annotations: olm.catalogImageTemplate: 3 \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: displayName: Example Catalog 4 image: quay.io/example-org/example-catalog:v1 5 priority: -400 6 publisher: Example Org sourceType: grpc 7 grpcPodConfig: securityContextConfig: <security_mode> 8 nodeSelector: 9 custom_label: <label> priorityClassName: system-cluster-critical 10 tolerations: 11 - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" updateStrategy: registryPoll: 12 interval: 30m0s status: connectionState: address: example-catalog.openshift-marketplace.svc:50051 lastConnect: 2021-08-26T18:14:31Z lastObservedState: READY 13 latestImageRegistryPoll: 2021-08-26T18:46:25Z 14 registryService: 15 createdAt: 2021-08-26T16:16:37Z port: 50051 protocol: grpc serviceName: example-catalog serviceNamespace: openshift-marketplace", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace", "registry.redhat.io/redhat/redhat-operator-index:v4.15", "registry.redhat.io/redhat/redhat-operator-index:v4.16", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog namespace: openshift-marketplace annotations: olm.catalogImageTemplate: \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}\" spec: displayName: Example Catalog image: quay.io/example-org/example-catalog:v1.29 priority: -400 publisher: Example Org", "quay.io/example-org/example-catalog:v1.29", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: 
example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace", "apiVersion: operators.coreos.com/v1alpha1 kind: InstallPlan metadata: name: install-abcde namespace: operators spec: approval: Automatic approved: true clusterServiceVersionNames: - my-operator.v1.0.1 generation: 1 status: catalogSources: [] conditions: - lastTransitionTime: '2021-01-01T20:17:27Z' lastUpdateTime: '2021-01-01T20:17:27Z' status: 'True' type: Installed phase: Complete plan: - resolving: my-operator.v1.0.1 resource: group: operators.coreos.com kind: ClusterServiceVersion manifest: >- name: my-operator.v1.0.1 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1alpha1 status: Created - resolving: my-operator.v1.0.1 resource: group: apiextensions.k8s.io kind: CustomResourceDefinition manifest: >- name: webservers.web.servers.org sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1beta1 status: Created - resolving: my-operator.v1.0.1 resource: group: '' kind: ServiceAccount manifest: >- name: my-operator sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: Role manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: RoleBinding manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created", "packageName: example channels: - name: alpha currentCSV: example.v0.1.2 - name: beta currentCSV: example.v0.1.3 defaultChannel: alpha", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: etcdoperator.v0.9.2 namespace: placeholder annotations: spec: displayName: etcd description: Etcd Operator replaces: etcdoperator.v0.9.0 skips: - etcdoperator.v0.9.1", "olm.skipRange: <semver_range>", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: elasticsearch-operator.v4.1.2 namespace: <namespace> annotations: olm.skipRange: '>=4.1.0 <4.1.2'", "properties: - type: olm.kubeversion value: version: \"1.16.0\"", "properties: - property: type: color value: red - property: type: shape value: square - property: type: olm.gvk value: group: olm.coreos.io version: v1alpha1 kind: myresource", "dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2", "type: olm.constraint value: failureMessage: 'require to have \"certified\"' cel: rule: 'properties.exists(p, p.type == \"certified\")'", "type: olm.constraint value: failureMessage: 'require to have \"certified\" and \"stable\" properties' cel: rule: 'properties.exists(p, p.type == \"certified\") && properties.exists(p, p.type == \"stable\")'", "schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: All are required for Red because all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: GVK Green/v1 is needed for gvk: group: greens.example.com version: v1 kind: Green", "schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Any are required for Red because any: 
constraints: - gvk: group: blues.example.com version: v1beta1 kind: Blue - gvk: group: blues.example.com version: v1beta2 kind: Blue - gvk: group: blues.example.com version: v1 kind: Blue", "schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: Cannot be required for Red because not: constraints: - gvk: group: greens.example.com version: v1alpha1 kind: greens", "schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Required for Red because any: constraints: - all: constraints: - package: name: blue versionRange: '>=1.0.0' - gvk: group: blues.example.com version: v1 kind: Blue - all: constraints: - package: name: blue versionRange: '<1.0.0' - gvk: group: blues.example.com version: v1beta1 kind: Blue", "apiVersion: \"operators.coreos.com/v1alpha1\" kind: \"CatalogSource\" metadata: name: \"my-operators\" namespace: \"operators\" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: example.com/my/operator-index:v1 displayName: \"My Operators\" priority: 100", "dependencies: - type: olm.package value: packageName: etcd version: \">3.1.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: targetNamespaces: - my-namespace", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: selector: cool.io/prod: \"true\"", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: PackageManifest.v1alpha1.packages.apps.redhat.com name: olm-operators namespace: local spec: selector: {} serviceAccountName: metadata: creationTimestamp: null targetNamespaces: - local status: lastUpdated: 2019-02-19T16:18:28Z namespaces: - local", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: false EOF", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true EOF", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-monitoring namespace: cluster-monitoring annotations: olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com spec: staticProvidedAPIs: true selector: matchLabels: something.cool.io/cluster-monitoring: \"true\"", "attenuated service account query failed - more than one operator group(s) are managing this namespace count=2", "apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: conditions: - type: Upgradeable 1 status: \"False\" 2 reason: \"migration\" message: \"The Operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"", "apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true 1 sources: [ 2 { name: \"community-operators\", disabled: false } ]", "registry.redhat.io/redhat/redhat-operator-index:v4.8", "registry.redhat.io/redhat/redhat-operator-index:v4.9", "apiVersion: 
apiextensions.k8s.io/v1 1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com 2 spec: group: stable.example.com 3 versions: - name: v1 4 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: cronSpec: type: string image: type: string replicas: type: integer scope: Namespaced 5 names: plural: crontabs 6 singular: crontab 7 kind: CronTab 8 shortNames: - ct 9", "oc create -f <file_name>.yaml", "/apis/<spec:group>/<spec:version>/<scope>/*/<names-plural>/", "/apis/stable.example.com/v1/namespaces/*/crontabs/", "kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 1 metadata: name: aggregate-cron-tabs-admin-edit 2 labels: rbac.authorization.k8s.io/aggregate-to-admin: \"true\" 3 rbac.authorization.k8s.io/aggregate-to-edit: \"true\" 4 rules: - apiGroups: [\"stable.example.com\"] 5 resources: [\"crontabs\"] 6 verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\", \"deletecollection\"] 7 --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: aggregate-cron-tabs-view 8 labels: # Add these permissions to the \"view\" default role. rbac.authorization.k8s.io/aggregate-to-view: \"true\" 9 rbac.authorization.k8s.io/aggregate-to-cluster-reader: \"true\" 10 rules: - apiGroups: [\"stable.example.com\"] 11 resources: [\"crontabs\"] 12 verbs: [\"get\", \"list\", \"watch\"] 13", "oc create -f <file_name>.yaml", "apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image", "oc create -f <file_name>.yaml", "oc get <kind>", "oc get crontab", "NAME KIND my-new-cron-object CronTab.v1.stable.example.com", "oc get crontabs", "oc get crontab", "oc get ct", "oc get <kind> -o yaml", "oc get ct -o yaml", "apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2", "apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image", "oc create -f <file_name>.yaml", "oc get <kind>", "oc get crontab", "NAME KIND my-new-cron-object CronTab.v1.stable.example.com", "oc get crontabs", "oc get crontab", "oc get ct", "oc get <kind> -o yaml", "oc get ct -o yaml", "apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2", "oc get csv", "oc policy add-role-to-user edit <user> -n <target_project>", "oc get packagemanifests -n openshift-marketplace", "NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m 
crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m", "oc describe packagemanifests <operator_name> -n openshift-marketplace", "Kind: PackageManifest Install Modes: 1 Supported: true Type: OwnNamespace Supported: true Type: SingleNamespace Supported: false Type: MultiNamespace Supported: true Type: AllNamespaces Entries: Name: example-operator.v3.7.11 Version: 3.7.11 Name: example-operator.v3.7.10 Version: 3.7.10 Name: stable-3.7 2 Entries: Name: example-operator.v3.8.5 Version: 3.8.5 Name: example-operator.v3.8.4 Version: 3.8.4 Name: stable-3.8 3 Default Channel: stable-3.8 4", "oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml", "oc get packagemanifest --selector=catalog=<catalogsource_name> --field-selector metadata.name=<operator_name> -n <catalog_namespace> -o yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> 1 spec: targetNamespaces: - <namespace> 2", "oc apply -f operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: <namespace_per_install_mode> 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: <catalog_name> 4 sourceNamespace: <catalog_source_namespace> 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-operator spec: channel: stable-3.7 installPlanApproval: Manual 1 name: example-operator source: custom-operators sourceNamespace: openshift-marketplace startingCSV: example-operator.v3.7.10 2", "kind: Subscription spec: installPlanApproval: Manual 1", "kind: Subscription spec: config: env: - name: ROLEARN value: \"<role_arn>\" 1", "kind: Subscription spec: config: env: - name: CLIENTID value: \"<client_id>\" 1 - name: TENANTID value: \"<tenant_id>\" 2 - name: SUBSCRIPTIONID value: \"<subscription_id>\" 3", "oc apply -f subscription.yaml", "oc describe subscription <subscription_name> -n <namespace>", "oc describe operatorgroup <operatorgroup_name> -n <namespace>", "oc get packagemanifests -n openshift-marketplace", "NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m", "oc describe packagemanifests <operator_name> -n openshift-marketplace", "Kind: PackageManifest Install Modes: 1 Supported: true Type: OwnNamespace Supported: true Type: SingleNamespace Supported: false Type: MultiNamespace Supported: true Type: AllNamespaces Entries: Name: example-operator.v3.7.11 Version: 3.7.11 Name: example-operator.v3.7.10 Version: 3.7.10 Name: stable-3.7 2 Entries: Name: example-operator.v3.8.5 Version: 3.8.5 Name: example-operator.v3.8.4 Version: 3.8.4 Name: stable-3.8 3 Default Channel: stable-3.8 4", "oc get 
packagemanifests <operator_name> -n <catalog_namespace> -o yaml", "oc get packagemanifest --selector=catalog=<catalogsource_name> --field-selector metadata.name=<operator_name> -n <catalog_namespace> -o yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> 1 spec: targetNamespaces: - <namespace> 2", "oc apply -f operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: <namespace_per_install_mode> 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: <catalog_name> 4 sourceNamespace: <catalog_source_namespace> 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-operator spec: channel: stable-3.7 installPlanApproval: Manual 1 name: example-operator source: custom-operators sourceNamespace: openshift-marketplace startingCSV: example-operator.v3.7.10 2", "kind: Subscription spec: installPlanApproval: Manual 1", "kind: Subscription spec: config: env: - name: ROLEARN value: \"<role_arn>\" 1", "kind: Subscription spec: config: env: - name: CLIENTID value: \"<client_id>\" 1 - name: TENANTID value: \"<tenant_id>\" 2 - name: SUBSCRIPTIONID value: \"<subscription_id>\" 3", "oc apply -f subscription.yaml", "oc describe subscription <subscription_name> -n <namespace>", "oc describe operatorgroup <operatorgroup_name> -n <namespace>", "apiVersion: v1 kind: Namespace metadata: name: team1-operator", "oc create -f team1-operator.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: team1-operatorgroup namespace: team1-operator spec: targetNamespaces: - team1 1", "oc create -f team1-operatorgroup.yaml", "apiVersion: v1 kind: Namespace metadata: name: global-operators", "oc create -f global-operators.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: global-operatorgroup namespace: global-operators", "oc create -f global-operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-163-94.us-west-2.compute.internal #", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - arm64 - key: kubernetes.io/os operator: In values: - linux #", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries 
config: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - test topologyKey: kubernetes.io/hostname #", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: cpu operator: In values: - high topologyKey: kubernetes.io/hostname #", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal #", "oc get pods -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>", "oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV", "currentCSV: serverless-operator.v1.28.0", "oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless", "subscription.operators.coreos.com \"serverless-operator\" deleted", "oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless", "clusterserviceversion.operators.coreos.com \"serverless-operator.v1.28.0\" deleted", "ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"", "rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host", "oc get sub,csv -n <namespace>", "NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded", "oc delete subscription <subscription_name> -n <namespace>", "oc delete csv <csv_name> -n <namespace>", "oc get job,configmap -n openshift-marketplace", "NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s", "oc delete job <job_name> -n openshift-marketplace", "oc delete configmap <configmap_name> -n openshift-marketplace", "oc get sub,csv,installplan -n <namespace>", "oc get csvs -n openshift", "oc apply -f - <<EOF apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true 1 EOF", "oc get events", "LAST SEEN TYPE REASON OBJECT MESSAGE 85s Warning DisabledCopiedCSVs clusterserviceversion/my-csv.v1.0.0 CSV copying disabled for operators/my-csv.v1.0.0", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd-config-test namespace: openshift-operators spec: config: env: - name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - 
name: NO_PROXY value: test channel: clusterwide-alpha installPlanApproval: Automatic name: etcd source: community-operators sourceNamespace: openshift-marketplace startingCSV: etcdoperator.v0.9.4-clusterwide", "oc get deployment -n openshift-operators etcd-operator -o yaml | grep -i \"PROXY\" -A 2", "- name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21088a98b93838e284a6086b13917f96b0d9c", "apiVersion: v1 kind: ConfigMap metadata: name: trusted-ca 1 labels: config.openshift.io/inject-trusted-cabundle: \"true\" 2", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: my-operator spec: package: etcd channel: alpha config: 1 selector: matchLabels: <labels_for_pods> 2 volumes: 3 - name: trusted-ca configMap: name: trusted-ca items: - key: ca-bundle.crt 4 path: tls-ca-bundle.pem 5 volumeMounts: 6 - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true", "oc get subs -n <operator_namespace>", "oc describe sub <subscription_name> -n <operator_namespace>", "Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy", "oc get catalogsources -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m", "oc describe catalogsource example-catalog -n openshift-marketplace", "Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m", "oc describe pod example-catalog-bwt8z -n openshift-marketplace", "Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: 
ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull", "oc edit operatorcondition <name>", "apiVersion: operators.coreos.com/v2 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: overrides: - type: Upgradeable 1 status: \"True\" reason: \"upgradeIsSafe\" message: \"This is a known issue with the Operator where it always reports that it cannot be upgraded.\" conditions: - type: Upgradeable status: \"False\" reason: \"migration\" message: \"The operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"", "cat <<EOF | oc create -f - apiVersion: v1 kind: Namespace metadata: name: scoped EOF", "cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: scoped namespace: scoped EOF", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret type: kubernetes.io/service-account-token 1 metadata: name: scoped namespace: scoped annotations: kubernetes.io/service-account.name: scoped EOF", "cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: scoped namespace: scoped rules: - apiGroups: [\"*\"] resources: [\"*\"] verbs: [\"*\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: scoped-bindings namespace: scoped roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: scoped subjects: - kind: ServiceAccount name: scoped namespace: scoped EOF", "cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: scoped namespace: scoped spec: serviceAccountName: scoped 1 targetNamespaces: - scoped EOF", "cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-cert-manager-operator namespace: scoped spec: channel: stable-v1 name: openshift-cert-manager-operator source: <catalog_source_name> 1 sourceNamespace: <catalog_source_namespace> 2 EOF", "kind: Role rules: - apiGroups: [\"operators.coreos.com\"] resources: [\"subscriptions\", \"clusterserviceversions\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"\"] resources: [\"services\", \"serviceaccounts\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"rbac.authorization.k8s.io\"] resources: [\"roles\", \"rolebindings\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"apps\"] 1 resources: [\"deployments\"] verbs: [\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"] - apiGroups: [\"\"] 2 resources: [\"pods\"] verbs: [\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"]", "kind: ClusterRole 1 rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"get\"] --- kind: Role rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"create\", \"update\", \"patch\"]", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd namespace: scoped status: installPlanRef: apiVersion: operators.coreos.com/v1alpha1 kind: InstallPlan name: install-4plp8 namespace: scoped resourceVersion: \"117359\" uid: 2c1df80e-afea-11e9-bce3-5254009c9c23", "apiVersion: operators.coreos.com/v1alpha1 kind: InstallPlan status: conditions: - 
lastTransitionTime: \"2019-07-26T21:13:10Z\" lastUpdateTime: \"2019-07-26T21:13:10Z\" message: 'error creating clusterrole etcdoperator.v0.9.4-clusterwide-dsfx4: clusterroles.rbac.authorization.k8s.io is forbidden: User \"system:serviceaccount:scoped:scoped\" cannot create resource \"clusterroles\" in API group \"rbac.authorization.k8s.io\" at the cluster scope' reason: InstallComponentFailed status: \"False\" type: Installed phase: Failed", "mkdir <catalog_dir>", "opm generate dockerfile <catalog_dir> -i registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.16 1", ". 1 ├── <catalog_dir> 2 └── <catalog_dir>.Dockerfile 3", "opm init <operator_name> \\ 1 --default-channel=preview \\ 2 --description=./README.md \\ 3 --icon=./operator-icon.svg \\ 4 --output yaml \\ 5 > <catalog_dir>/index.yaml 6", "opm render <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --output=yaml >> <catalog_dir>/index.yaml 2", "--- schema: olm.channel package: <operator_name> name: preview entries: - name: <operator_name>.v0.1.0 1", "opm validate <catalog_dir>", "echo USD?", "0", "podman build . -f <catalog_dir>.Dockerfile -t <registry>/<namespace>/<catalog_image_name>:<tag>", "podman login <registry>", "podman push <registry>/<namespace>/<catalog_image_name>:<tag>", "opm render <registry>/<namespace>/<catalog_image_name>:<tag> -o yaml > <catalog_dir>/index.yaml", "--- defaultChannel: release-2.7 icon: base64data: <base64_string> mediatype: image/svg+xml name: example-operator schema: olm.package --- entries: - name: example-operator.v2.7.0 skipRange: '>=2.6.0 <2.7.0' - name: example-operator.v2.7.1 replaces: example-operator.v2.7.0 skipRange: '>=2.6.0 <2.7.1' - name: example-operator.v2.7.2 replaces: example-operator.v2.7.1 skipRange: '>=2.6.0 <2.7.2' - name: example-operator.v2.7.3 replaces: example-operator.v2.7.2 skipRange: '>=2.6.0 <2.7.3' - name: example-operator.v2.7.4 replaces: example-operator.v2.7.3 skipRange: '>=2.6.0 <2.7.4' name: release-2.7 package: example-operator schema: olm.channel --- image: example.com/example-inc/example-operator-bundle@sha256:<digest> name: example-operator.v2.7.0 package: example-operator properties: - type: olm.gvk value: group: example-group.example.io kind: MyObject version: v1alpha1 - type: olm.gvk value: group: example-group.example.io kind: MyOtherObject version: v1beta1 - type: olm.package value: packageName: example-operator version: 2.7.0 - type: olm.bundle.object value: data: <base64_string> - type: olm.bundle.object value: data: <base64_string> relatedImages: - image: example.com/example-inc/example-related-image@sha256:<digest> name: example-related-image schema: olm.bundle ---", "opm validate <catalog_dir>", "podman build . 
-f <catalog_dir>.Dockerfile -t <registry>/<namespace>/<catalog_image_name>:<tag>", "podman push <registry>/<namespace>/<catalog_image_name>:<tag>", "opm index add --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --tag <registry>/<namespace>/<index_image_name>:<tag> \\ 2 [--binary-image <registry_base_image>] 3", "podman login <registry>", "podman push <registry>/<namespace>/<index_image_name>:<tag>", "opm index add --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \\ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \\ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \\ 3 --pull-tool podman 4", "opm index add --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 --from-index mirror.example.com/abc/abc-redhat-operator-index:4.16 --tag mirror.example.com/abc/abc-redhat-operator-index:4.16.1 --pull-tool podman", "podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>", "oc get packagemanifests -n openshift-marketplace", "podman login <target_registry>", "podman run -p50051:50051 -it registry.redhat.io/redhat/redhat-operator-index:v4.16", "Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.16 Getting image source signatures Copying blob ae8a0c23f5b1 done INFO[0000] serving registry database=/database/index.db port=50051", "grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out", "{ \"name\": \"advanced-cluster-management\" } { \"name\": \"jaeger-product\" } { \"name\": \"quay-operator\" }", "opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.16 \\ 1 -p advanced-cluster-management,jaeger-product,quay-operator \\ 2 [-i registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.16] \\ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.16 4", "podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.16", "opm migrate <registry_image> <fbc_directory>", "opm generate dockerfile <fbc_directory> --binary-image registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.16", "opm index add --binary-image registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.16 --from-index <your_registry_image> --bundles \"\" -t \\<your_registry_image>", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-catsrc namespace: my-ns spec: sourceType: grpc grpcPodConfig: securityContextConfig: legacy image: my-image:latest", "apiVersion: v1 kind: Namespace metadata: labels: security.openshift.io/scc.podSecurityLabelSync: \"false\" 1 openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: baseline 2 name: \"<namespace_name>\"", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace 1 annotations: olm.catalogImageTemplate: 2 \"<registry>/<namespace>/<index_image_name>:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/<index_image_name>:<tag> 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m", "oc apply -f catalogSource.yaml", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h", "oc get catalogsource -n openshift-marketplace", "NAME DISPLAY TYPE 
PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s", "oc get packagemanifest -n openshift-marketplace", "NAME CATALOG AGE jaeger-product My Operator Catalog 93s", "podman login <registry>:<port>", "{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" }, \"quay.io\": { \"auth\": \"fegdsRib21iMQ==\" }, \"https://quay.io/my-namespace/my-user/my-image\": { \"auth\": \"eWfjwsDdfsa221==\" }, \"https://quay.io/my-namespace/my-user\": { \"auth\": \"feFweDdscw34rR==\" }, \"https://quay.io/my-namespace\": { \"auth\": \"frwEews4fescyq==\" } } }", "{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" } } }", "{ \"auths\": { \"quay.io\": { \"auth\": \"Xd2lhdsbnRib21iMQ==\" } } }", "oc create secret generic <secret_name> -n openshift-marketplace --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace spec: sourceType: grpc secrets: 1 - \"<secret_name_1>\" - \"<secret_name_2>\" grpcPodConfig: securityContextConfig: <security_mode> 2 image: <registry>:<port>/<namespace>/<image>:<tag> displayName: My Operator Catalog publisher: <publisher_name> updateStrategy: registryPoll: interval: 30m", "oc extract secret/pull-secret -n openshift-config --confirm", "cat .dockerconfigjson | jq --compact-output '.auths[\"<registry>:<port>/<namespace>/\"] |= . + {\"auth\":\"<token>\"}' \\ 1 > new_dockerconfigjson", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=new_dockerconfigjson", "oc create secret generic <secret_name> -n <tenant_namespace> --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson", "oc get sa -n <tenant_namespace> 1", "NAME SECRETS AGE builder 2 6m1s default 2 6m1s deployer 2 6m1s etcd-operator 2 5m18s 1", "oc secrets link <operator_sa> -n <tenant_namespace> <secret_name> --for=pull", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/redhat-operator-index:v4.16 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m", "oc apply -f catalogSource.yaml", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h", "oc get catalogsource -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s", "oc get packagemanifest -n openshift-marketplace", "NAME CATALOG AGE jaeger-product My Operator Catalog 93s", "oc patch operatorhub cluster -p '{\"spec\": {\"disableAllDefaultSources\": true}}' --type=merge", "grpcPodConfig: nodeSelector: custom_label: <label>", "grpcPodConfig: priorityClassName: <priority_class>", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: example-catalog namespace: openshift-marketplace annotations: operatorframework.io/priorityclass: system-cluster-critical", "grpcPodConfig: tolerations: - key: 
\"<key_name>\" operator: \"<operator_type>\" value: \"<value>\" effect: \"<effect>\"", "oc get subs -n <operator_namespace>", "oc describe sub <subscription_name> -n <operator_namespace>", "Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy", "oc get catalogsources -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m", "oc describe catalogsource example-catalog -n openshift-marketplace", "Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m", "oc describe pod example-catalog-bwt8z -n openshift-marketplace", "Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull", "oc get clusteroperators", "oc get pod -n <operator_namespace>", "oc describe pod <operator_pod_name> -n <operator_namespace>", "oc debug node/my-node", "chroot /host", "crictl ps", "crictl ps --name network-operator", "oc get pods -n <operator_namespace>", "oc logs pod/<pod_name> -n <operator_namespace>", "oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo 
crictl inspectp <operator_pod_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: true 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: false 1", "oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/master", "oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/worker", "oc get machineconfigpool/master --template='{{.spec.paused}}'", "oc get machineconfigpool/worker --template='{{.spec.paused}}'", "true", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-33cf0a1254318755d7b48002c597bf91 True False worker rendered-worker-e405a5bdb0db1295acea08bcca33fa60 False False", "oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/master", "oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/worker", "oc get machineconfigpool/master --template='{{.spec.paused}}'", "oc get machineconfigpool/worker --template='{{.spec.paused}}'", "false", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True", "ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"", "rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host", "oc get sub,csv -n <namespace>", "NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded", "oc delete subscription <subscription_name> -n <namespace>", "oc delete csv <csv_name> -n <namespace>", "oc get job,configmap -n openshift-marketplace", "NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s", "oc delete job <job_name> -n openshift-marketplace", "oc delete configmap <configmap_name> -n openshift-marketplace", "oc get sub,csv,installplan -n <namespace>", "message: 'Failed to delete all resource types, 1 remaining: Internal error occurred: error resolving resource'", "oc get namespaces", "operator-ns-1 Terminating", "oc get crds", "oc delete crd <crd_name>", "oc get EtcdCluster -n <namespace_name>", "oc get EtcdCluster --all-namespaces", "oc delete <cr_name> <cr_instance_name> -n <namespace_name>", "oc get namespace <namespace_name>", "oc get sub,csv,installplan -n <namespace>", "tar xvf operator-sdk-v1.36.1-ocp-linux-x86_64.tar.gz", "chmod +x operator-sdk", "echo USDPATH", "sudo mv ./operator-sdk /usr/local/bin/operator-sdk", "operator-sdk version", "operator-sdk version: \"v1.36.1-ocp\",", "tar xvf operator-sdk-v1.36.1-ocp-darwin-x86_64.tar.gz", "tar xvf operator-sdk-v1.36.1-ocp-darwin-aarch64.tar.gz", "chmod +x operator-sdk", "echo USDPATH", "sudo mv ./operator-sdk 
/usr/local/bin/operator-sdk", "operator-sdk version", "operator-sdk version: \"v1.36.1-ocp\",", "mkdir memcached-operator", "cd memcached-operator", "operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator", "operator-sdk create api --resource=true --controller=true --group cache --version v1 --kind Memcached", "make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make install", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system", "oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system", "oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system", "make undeploy", "mkdir -p USDHOME/projects/memcached-operator", "cd USDHOME/projects/memcached-operator", "export GO111MODULE=on", "operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator", "domain: example.com layout: - go.kubebuilder.io/v3 projectName: memcached-operator repo: github.com/example-inc/memcached-operator version: \"3\" plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {}", "mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})", "mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: \"\"})", "var namespaces []string 1 mgr, err := ctrl.NewManager(cfg, manager.Options{ 2 NewCache: cache.MultiNamespacedCacheBuilder(namespaces), })", "operator-sdk edit --multigroup=true", "domain: example.com layout: go.kubebuilder.io/v3 multigroup: true", "operator-sdk create api --group=cache --version=v1 --kind=Memcached", "Create Resource [y/n] y Create Controller [y/n] y", "Writing scaffold for you to edit api/v1/memcached_types.go controllers/memcached_controller.go", "// MemcachedSpec defines the desired state of Memcached type MemcachedSpec struct { // +kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedStatus defines the observed state of Memcached type MemcachedStatus struct { // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }", "make generate", "make manifests", "/* Copyright 2020. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ package controllers import ( appsv1 \"k8s.io/api/apps/v1\" corev1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/types\" \"reflect\" \"context\" \"github.com/go-logr/logr\" \"k8s.io/apimachinery/pkg/runtime\" ctrl \"sigs.k8s.io/controller-runtime\" \"sigs.k8s.io/controller-runtime/pkg/client\" ctrllog \"sigs.k8s.io/controller-runtime/pkg/log\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) // MemcachedReconciler reconciles a Memcached object type MemcachedReconciler struct { client.Client Log logr.Logger Scheme *runtime.Scheme } // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; // Reconcile is part of the main kubernetes reconciliation loop which aims to // move the current state of the cluster closer to the desired state. // TODO(user): Modify the Reconcile function to compare the state specified by // the Memcached object against the actual cluster state, and then // perform operations to make the cluster state reflect the state specified by // the user. // // For more details, check Reconcile and its Result here: // - https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/reconcile func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { //log := r.Log.WithValues(\"memcached\", req.NamespacedName) log := ctrllog.FromContext(ctx) // Fetch the Memcached instance memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) if err != nil { if errors.IsNotFound(err) { // Request object not found, could have been deleted after reconcile request. // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers. // Return and don't requeue log.Info(\"Memcached resource not found. Ignoring since object must be deleted\") return ctrl.Result{}, nil } // Error reading the object - requeue the request. 
log.Error(err, \"Failed to get Memcached\") return ctrl.Result{}, err } // Check if the deployment already exists, if not create a new one found := &appsv1.Deployment{} err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found) if err != nil && errors.IsNotFound(err) { // Define a new deployment dep := r.deploymentForMemcached(memcached) log.Info(\"Creating a new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) err = r.Create(ctx, dep) if err != nil { log.Error(err, \"Failed to create new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) return ctrl.Result{}, err } // Deployment created successfully - return and requeue return ctrl.Result{Requeue: true}, nil } else if err != nil { log.Error(err, \"Failed to get Deployment\") return ctrl.Result{}, err } // Ensure the deployment size is the same as the spec size := memcached.Spec.Size if *found.Spec.Replicas != size { found.Spec.Replicas = &size err = r.Update(ctx, found) if err != nil { log.Error(err, \"Failed to update Deployment\", \"Deployment.Namespace\", found.Namespace, \"Deployment.Name\", found.Name) return ctrl.Result{}, err } // Spec updated - return and requeue return ctrl.Result{Requeue: true}, nil } // Update the Memcached status with the pod names // List the pods for this memcached's deployment podList := &corev1.PodList{} listOpts := []client.ListOption{ client.InNamespace(memcached.Namespace), client.MatchingLabels(labelsForMemcached(memcached.Name)), } if err = r.List(ctx, podList, listOpts...); err != nil { log.Error(err, \"Failed to list pods\", \"Memcached.Namespace\", memcached.Namespace, \"Memcached.Name\", memcached.Name) return ctrl.Result{}, err } podNames := getPodNames(podList.Items) // Update status.Nodes if needed if !reflect.DeepEqual(podNames, memcached.Status.Nodes) { memcached.Status.Nodes = podNames err := r.Status().Update(ctx, memcached) if err != nil { log.Error(err, \"Failed to update Memcached status\") return ctrl.Result{}, err } } return ctrl.Result{}, nil } // deploymentForMemcached returns a memcached Deployment object func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1.Memcached) *appsv1.Deployment { ls := labelsForMemcached(m.Name) replicas := m.Spec.Size dep := &appsv1.Deployment{ ObjectMeta: metav1.ObjectMeta{ Name: m.Name, Namespace: m.Namespace, }, Spec: appsv1.DeploymentSpec{ Replicas: &replicas, Selector: &metav1.LabelSelector{ MatchLabels: ls, }, Template: corev1.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: ls, }, Spec: corev1.PodSpec{ Containers: []corev1.Container{{ Image: \"memcached:1.4.36-alpine\", Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{ ContainerPort: 11211, Name: \"memcached\", }}, }}, }, }, }, } // Set Memcached instance as the owner and controller ctrl.SetControllerReference(m, dep, r.Scheme) return dep } // labelsForMemcached returns the labels for selecting the resources // belonging to the given memcached CR name. func labelsForMemcached(name string) map[string]string { return map[string]string{\"app\": \"memcached\", \"memcached_cr\": name} } // getPodNames returns the pod names of the array of pods passed in func getPodNames(pods []corev1.Pod) []string { var podNames []string for _, pod := range pods { podNames = append(podNames, pod.Name) } return podNames } // SetupWithManager sets up the controller with the Manager. 
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }", "import ( appsv1 \"k8s.io/api/apps/v1\" ) func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }", "func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). WithOptions(controller.Options{ MaxConcurrentReconciles: 2, }). Complete(r) }", "import ( ctrl \"sigs.k8s.io/controller-runtime\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Lookup the Memcached instance for this reconcile request memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) }", "// Reconcile successful - don't requeue return ctrl.Result{}, nil // Reconcile failed due to error - requeue return ctrl.Result{}, err // Requeue for any reason other than an error return ctrl.Result{Requeue: true}, nil", "import \"time\" // Reconcile for any reason other than an error after 5 seconds return ctrl.Result{RequeueAfter: time.Second*5}, nil", "// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { }", "import ( \"github.com/operator-framework/operator-lib/proxy\" )", "for i, container := range dep.Spec.Template.Spec.Containers { dep.Spec.Template.Spec.Containers[i].Env = append(container.Env, proxy.ReadProxyVarsFromEnv()...) 
}", "containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"", "make install run", "2021-01-10T21:09:29.016-0700 INFO controller-runtime.metrics metrics server is starting to listen {\"addr\": \":8080\"} 2021-01-10T21:09:29.017-0700 INFO setup starting manager 2021-01-10T21:09:29.017-0700 INFO controller-runtime.manager starting metrics server {\"path\": \"/metrics\"} 2021-01-10T21:09:29.018-0700 INFO controller-runtime.manager.controller.memcached Starting EventSource {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"source\": \"kind source: /, Kind=\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting Controller {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting workers {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"worker count\": 1}", "make docker-build IMG=<registry>/<user>/<image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc get deployment -n <project_name>-system", "NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m", "make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>", "docker push <registry>/<user>/<bundle_image_name>:<tag>", "operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3", "oc project memcached-operator-system", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3", "oc apply -f config/samples/cache_v1_memcached.yaml", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m", "oc get pods", "NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m", "oc get memcached/memcached-sample -o yaml", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7", "oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m", "oc delete -f config/samples/cache_v1_memcached.yaml", "make undeploy", "operator-sdk cleanup <project_name>", "Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. 
OPERATOR_SDK_VERSION ?= v1.36.1-ocp", "containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.16", "docker-buildx: ## Build and push the Docker image for the manager for multi-platform support - docker buildx create --name project-v3-builder docker buildx use project-v3-builder - docker buildx build --push --platform=USD(PLATFORMS) --tag USD{IMG} -f Dockerfile . - docker buildx rm project-v3-builder", "k8s.io/api v0.29.2 k8s.io/apimachinery v0.29.2 k8s.io/client-go v0.29.2 sigs.k8s.io/controller-runtime v0.17.3", "go mod tidy", "+ .PHONY: build-installer + build-installer: manifests generate kustomize ## Generate a consolidated YAML with CRDs and deployment. + mkdir -p dist + cd config/manager && USD(KUSTOMIZE) edit set image controller=USD{IMG} + USD(KUSTOMIZE) build config/default > dist/install.yaml", "- ENVTEST_K8S_VERSION = 1.28.3 + ENVTEST_K8S_VERSION = 1.29.0", "- GOLANGCI_LINT = USD(shell pwd)/bin/golangci-lint - GOLANGCI_LINT_VERSION ?= v1.54.2 - golangci-lint: - @[ -f USD(GOLANGCI_LINT) ] || { - set -e ; - curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b USD(shell dirname USD(GOLANGCI_LINT)) USD(GOLANGCI_LINT_VERSION) ; - }", "- ## Tool Binaries - KUBECTL ?= kubectl - KUSTOMIZE ?= USD(LOCALBIN)/kustomize - CONTROLLER_GEN ?= USD(LOCALBIN)/controller-gen - ENVTEST ?= USD(LOCALBIN)/setup-envtest - - ## Tool Versions - KUSTOMIZE_VERSION ?= v5.2.1 - CONTROLLER_TOOLS_VERSION ?= v0.13.0 - - .PHONY: kustomize - kustomize: USD(KUSTOMIZE) ## Download kustomize locally if necessary. If wrong version is installed, it will be removed before downloading. - USD(KUSTOMIZE): USD(LOCALBIN) - @if test -x USD(LOCALBIN)/kustomize && ! USD(LOCALBIN)/kustomize version | grep -q USD(KUSTOMIZE_VERSION); then - echo \"USD(LOCALBIN)/kustomize version is not expected USD(KUSTOMIZE_VERSION). Removing it before installing.\"; - rm -rf USD(LOCALBIN)/kustomize; - fi - test -s USD(LOCALBIN)/kustomize || GOBIN=USD(LOCALBIN) GO111MODULE=on go install sigs.k8s.io/kustomize/kustomize/v5@USD(KUSTOMIZE_VERSION) - - .PHONY: controller-gen - controller-gen: USD(CONTROLLER_GEN) ## Download controller-gen locally if necessary. If wrong version is installed, it will be overwritten. - USD(CONTROLLER_GEN): USD(LOCALBIN) - test -s USD(LOCALBIN)/controller-gen && USD(LOCALBIN)/controller-gen --version | grep -q USD(CONTROLLER_TOOLS_VERSION) || - GOBIN=USD(LOCALBIN) go install sigs.k8s.io/controller-tools/cmd/controller-gen@USD(CONTROLLER_TOOLS_VERSION) - - .PHONY: envtest - envtest: USD(ENVTEST) ## Download envtest-setup locally if necessary. - USD(ENVTEST): USD(LOCALBIN) - test -s USD(LOCALBIN)/setup-envtest || GOBIN=USD(LOCALBIN) go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest + ## Tool Binaries + KUBECTL ?= kubectl + KUSTOMIZE ?= USD(LOCALBIN)/kustomize-USD(KUSTOMIZE_VERSION) + CONTROLLER_GEN ?= USD(LOCALBIN)/controller-gen-USD(CONTROLLER_TOOLS_VERSION) + ENVTEST ?= USD(LOCALBIN)/setup-envtest-USD(ENVTEST_VERSION) + GOLANGCI_LINT = USD(LOCALBIN)/golangci-lint-USD(GOLANGCI_LINT_VERSION) + + ## Tool Versions + KUSTOMIZE_VERSION ?= v5.3.0 + CONTROLLER_TOOLS_VERSION ?= v0.14.0 + ENVTEST_VERSION ?= release-0.17 + GOLANGCI_LINT_VERSION ?= v1.57.2 + + .PHONY: kustomize + kustomize: USD(KUSTOMIZE) ## Download kustomize locally if necessary. 
+ USD(KUSTOMIZE): USD(LOCALBIN) + USD(call go-install-tool,USD(KUSTOMIZE),sigs.k8s.io/kustomize/kustomize/v5,USD(KUSTOMIZE_VERSION)) + + .PHONY: controller-gen + controller-gen: USD(CONTROLLER_GEN) ## Download controller-gen locally if necessary. + USD(CONTROLLER_GEN): USD(LOCALBIN) + USD(call go-install-tool,USD(CONTROLLER_GEN),sigs.k8s.io/controller-tools/cmd/controller-gen,USD(CONTROLLER_TOOLS_VERSION)) + + .PHONY: envtest + envtest: USD(ENVTEST) ## Download setup-envtest locally if necessary. + USD(ENVTEST): USD(LOCALBIN) + USD(call go-install-tool,USD(ENVTEST),sigs.k8s.io/controller-runtime/tools/setup-envtest,USD(ENVTEST_VERSION)) + + .PHONY: golangci-lint + golangci-lint: USD(GOLANGCI_LINT) ## Download golangci-lint locally if necessary. + USD(GOLANGCI_LINT): USD(LOCALBIN) + USD(call go-install-tool,USD(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,USD{GOLANGCI_LINT_VERSION}) + + # go-install-tool will 'go install' any package with custom target and name of binary, if it doesn't exist + # USD1 - target path with name of binary (ideally with version) + # USD2 - package url which can be installed + # USD3 - specific version of package + define go-install-tool + @[ -f USD(1) ] || { + set -e; + package=USD(2)@USD(3) ; + echo \"Downloading USDUSD{package}\" ; + GOBIN=USD(LOCALBIN) go install USDUSD{package} ; + mv \"USDUSD(echo \"USD(1)\" | sed \"s/-USD(3)USDUSD//\")\" USD(1) ; + } + endef", "mkdir memcached-operator", "cd memcached-operator", "operator-sdk init --plugins=ansible --domain=example.com", "operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1", "make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make install", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system", "oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system", "I0205 17:48:45.881666 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612547325.8819902,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612547325.98242,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612547325.9824686,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4} {\"level\":\"info\",\"ts\":1612547348.8311093,\"logger\":\"runner\",\"msg\":\"Ansible-runner exited successfully\",\"job\":\"4037200794235010051\",\"name\":\"memcached-sample\",\"namespace\":\"memcached-operator-system\"}", "oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system", "make undeploy", "mkdir -p USDHOME/projects/memcached-operator", "cd USDHOME/projects/memcached-operator", "operator-sdk init --plugins=ansible --domain=example.com", "domain: example.com layout: - ansible.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: memcached-operator version: \"3\"", "operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1", "--- - name: start memcached k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ 
ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211", "--- defaults file for Memcached size: 1", "apiVersion: cache.example.com/v1 kind: Memcached metadata: labels: app.kubernetes.io/name: memcached app.kubernetes.io/instance: memcached-sample app.kubernetes.io/part-of: memcached-operator app.kubernetes.io/managed-by: kustomize app.kubernetes.io/created-by: memcached-operator name: memcached-sample spec: size: 3", "env: - name: HTTP_PROXY value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}' - name: http_proxy value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}'", "containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"", "make install run", "{\"level\":\"info\",\"ts\":1612589622.7888272,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612589622.7897573,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612589622.789971,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612589622.7899997,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612589622.8904517,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612589622.8905244,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}", "make docker-build IMG=<registry>/<user>/<image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc get deployment -n <project_name>-system", "NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m", "make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>", "docker push <registry>/<user>/<bundle_image_name>:<tag>", "operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3", "oc project memcached-operator-system", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3", "oc apply -f config/samples/cache_v1_memcached.yaml", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m", "oc get pods", "NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m", "oc get memcached/memcached-sample -o yaml", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 
status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7", "oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m", "oc delete -f config/samples/cache_v1_memcached.yaml", "make undeploy", "operator-sdk cleanup <project_name>", "Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.36.1-ocp", "containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.16", "FROM registry.redhat.io/openshift4/ose-ansible-rhel9-operator:v4.16", "docker-buildx: ## Build and push the Docker image for the manager for multi-platform support - docker buildx create --name project-v3-builder docker buildx use project-v3-builder - docker buildx build --push --platform=USD(PLATFORMS) --tag USD{IMG} -f Dockerfile . - docker buildx rm project-v3-builder", "k8s.io/api v0.29.2 k8s.io/apimachinery v0.29.2 k8s.io/client-go v0.29.2 sigs.k8s.io/controller-runtime v0.17.3", "go mod tidy", "apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"", "- version: v1alpha1 1 group: test1.example.com kind: Test1 role: /opt/ansible/roles/Test1 - version: v1alpha1 2 group: test2.example.com kind: Test2 playbook: /opt/ansible/playbook.yml - version: v1alpha1 3 group: test3.example.com kind: Test3 playbook: /opt/ansible/test3.yml reconcilePeriod: 0 manageStatus: false", "- version: v1alpha1 group: app.example.com kind: AppService playbook: /opt/ansible/playbook.yml maxRunnerArtifacts: 30 reconcilePeriod: 5s manageStatus: False watchDependentResources: False", "apiVersion: \"app.example.com/v1alpha1\" kind: \"Database\" metadata: name: \"example\" spec: message: \"Hello world 2\" newParameter: \"newParam\"", "{ \"meta\": { \"name\": \"<cr_name>\", \"namespace\": \"<cr_namespace>\", }, \"message\": \"Hello world 2\", \"new_parameter\": \"newParam\", \"_app_example_com_database\": { <full_crd> }, }", "--- - debug: msg: \"name: {{ ansible_operator_meta.name }}, {{ ansible_operator_meta.namespace }}\"", "sudo dnf install ansible", "pip install kubernetes", "ansible-galaxy collection install community.kubernetes", "ansible-galaxy collection install -r requirements.yml", "--- - name: set ConfigMap example-config to {{ state }} community.kubernetes.k8s: api_version: v1 kind: ConfigMap name: example-config namespace: <operator_namespace> 1 state: \"{{ state }}\" ignore_errors: true 2", "--- state: present", "--- - hosts: localhost roles: - <kind>", "ansible-playbook playbook.yml", "[WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to present] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0", "oc get configmaps", "NAME DATA AGE example-config 0 2m1s", "ansible-playbook playbook.yml --extra-vars state=absent", "[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to absent] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0", "oc get configmaps", "apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"", "make install", "/usr/bin/kustomize build config/crd | kubectl apply -f - customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created", "make run", "/home/user/memcached-operator/bin/ansible-operator run {\"level\":\"info\",\"ts\":1612739145.2871568,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612739148.347306,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612739148.3488882,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612739148.3490262,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612739148.3490646,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612739148.350217,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612739148.3506632,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612739148.350784,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612739148.5511978,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} 
{\"level\":\"info\",\"ts\":1612739148.5512562,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}", "apiVersion: <group>.example.com/v1alpha1 kind: <kind> metadata: name: \"<kind>-sample\"", "oc apply -f config/samples/<gvk>.yaml", "oc get configmaps", "NAME STATUS AGE example-config Active 3s", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: state: absent", "oc apply -f config/samples/<gvk>.yaml", "oc get configmap", "make docker-build IMG=<registry>/<user>/<image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc get deployment -n <project_name>-system", "NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m", "oc logs deployment/<project_name>-controller-manager -c manager \\ 1 -n <namespace> 2", "{\"level\":\"info\",\"ts\":1612732105.0579333,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612732105.0587437,\"logger\":\"cmd\",\"msg\":\"WATCH_NAMESPACE environment variable not set. Watching all namespaces.\",\"Namespace\":\"\"} I0207 21:08:26.110949 7 request.go:645] Throttling request took 1.035521578s, request: GET:https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1alpha1?timeout=32s {\"level\":\"info\",\"ts\":1612732107.768025,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\"127.0.0.1:8080\"} {\"level\":\"info\",\"ts\":1612732107.768796,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612732107.7688773,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612732107.7688901,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612732107.770032,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} I0207 21:08:27.770185 7 leaderelection.go:243] attempting to acquire leader lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.770202,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} I0207 21:08:27.784854 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.7850506,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612732107.8853772,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612732107.8854098,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4}", "containers: - name: manager env: - name: ANSIBLE_DEBUG_LOGS value: \"True\"", "apiVersion: \"cache.example.com/v1alpha1\" kind: \"Memcached\" metadata: name: \"example-memcached\" 
annotations: \"ansible.sdk.operatorframework.io/verbosity\": \"4\" spec: size: 4", "status: conditions: - ansibleResult: changed: 3 completion: 2018-12-03T13:45:57.13329 failures: 1 ok: 6 skipped: 0 lastTransitionTime: 2018-12-03T13:45:57Z message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno 113] No route to host>' reason: Failed status: \"True\" type: Failure - lastTransitionTime: 2018-12-03T13:46:13Z message: Running reconciliation reason: Running status: \"True\" type: Running", "- version: v1 group: api.example.com kind: <kind> role: <role> manageStatus: false", "- operator_sdk.util.k8s_status: api_version: app.example.com/v1 kind: <kind> name: \"{{ ansible_operator_meta.name }}\" namespace: \"{{ ansible_operator_meta.namespace }}\" status: test: data", "collections: - operator_sdk.util", "k8s_status: status: key1: value1", "mkdir nginx-operator", "cd nginx-operator", "operator-sdk init --plugins=helm", "operator-sdk create api --group demo --version v1 --kind Nginx", "make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make install", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample", "oc apply -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system", "oc logs deployment.apps/nginx-operator-controller-manager -c manager -n nginx-operator-system", "oc delete -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system", "make undeploy", "mkdir -p USDHOME/projects/nginx-operator", "cd USDHOME/projects/nginx-operator", "operator-sdk init --plugins=helm --domain=example.com --group=demo --version=v1 --kind=Nginx", "operator-sdk init --plugins helm --help", "domain: example.com layout: - helm.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: nginx-operator resources: - api: crdVersion: v1 namespaced: true domain: example.com group: demo kind: Nginx version: v1 version: \"3\"", "Use the 'create api' subcommand to add watches to this file. 
- group: demo version: v1 kind: Nginx chart: helm-charts/nginx +kubebuilder:scaffold:watch", "apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2", "apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2 service: port: 8080", "- group: demo.example.com version: v1alpha1 kind: Nginx chart: helm-charts/nginx overrideValues: proxy.http: USDHTTP_PROXY", "proxy: http: \"\" https: \"\" no_proxy: \"\"", "containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}\" imagePullPolicy: {{ .Values.image.pullPolicy }} env: - name: http_proxy value: \"{{ .Values.proxy.http }}\"", "containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"", "make install run", "{\"level\":\"info\",\"ts\":1612652419.9289865,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612652419.9296563,\"logger\":\"helm.controller\",\"msg\":\"Watching resource\",\"apiVersion\":\"demo.example.com/v1\",\"kind\":\"Nginx\",\"namespace\":\"\",\"reconcilePeriod\":\"1m0s\"} {\"level\":\"info\",\"ts\":1612652419.929983,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612652419.930015,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: demo.example.com/v1, Kind=Nginx\"} {\"level\":\"info\",\"ts\":1612652420.2307851,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612652420.2309358,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting workers\",\"worker count\":8}", "make docker-build IMG=<registry>/<user>/<image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc get deployment -n <project_name>-system", "NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m", "make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>", "docker push <registry>/<user>/<bundle_image_name>:<tag>", "operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3", "oc project nginx-operator-system", "apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3", "oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample", "oc apply -f config/samples/demo_v1_nginx.yaml", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 8m nginx-sample 3/3 3 3 1m", "oc get pods", "NAME READY STATUS RESTARTS AGE nginx-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m nginx-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m nginx-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m", "oc get nginx/nginx-sample -o yaml", "apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3 status: nodes: - 
nginx-sample-6fd7c98d8-7dqdr - nginx-sample-6fd7c98d8-g5k7v - nginx-sample-6fd7c98d8-m7vn7", "oc patch nginx nginx-sample -p '{\"spec\":{\"replicaCount\": 5}}' --type=merge", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 10m nginx-sample 5/5 5 5 3m", "oc delete -f config/samples/demo_v1_nginx.yaml", "make undeploy", "operator-sdk cleanup <project_name>", "Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.36.1-ocp", "containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.16", "FROM registry.redhat.io/openshift4/ose-helm-rhel9-operator:v4.16", "docker-buildx: ## Build and push the Docker image for the manager for multi-platform support - docker buildx create --name project-v3-builder docker buildx use project-v3-builder - docker buildx build --push --platform=USD(PLATFORMS) --tag USD{IMG} -f Dockerfile . - docker buildx rm project-v3-builder", "k8s.io/api v0.29.2 k8s.io/apimachinery v0.29.2 k8s.io/client-go v0.29.2 sigs.k8s.io/controller-runtime v0.17.3", "go mod tidy", "- curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.2.1/kustomize_v5.2.1_USD(OS)_USD(ARCH).tar.gz | + curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.3.0/kustomize_v5.3.0_USD(OS)_USD(ARCH).tar.gz | \\", "apiVersion: apache.org/v1alpha1 kind: Tomcat metadata: name: example-app spec: replicaCount: 2", "{{ .Values.replicaCount }}", "oc get Tomcats --all-namespaces", "mkdir -p USDHOME/github.com/example/memcached-operator", "cd USDHOME/github.com/example/memcached-operator", "operator-sdk init --plugins=hybrid.helm.sdk.operatorframework.io --project-version=\"3\" --domain my.domain --repo=github.com/example/memcached-operator", "operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --group cache --version v1 --kind Memcached", "operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --help", "Use the 'create api' subcommand to add watches to this file. - group: cache.my.domain version: v1 kind: Memcached chart: helm-charts/memcached #+kubebuilder:scaffold:watch", "// Operator's main.go // With the help of helpers provided in the library, the reconciler can be // configured here before starting the controller with this reconciler. 
reconciler := reconciler.New( reconciler.WithChart(*chart), reconciler.WithGroupVersionKind(gvk), ) if err := reconciler.SetupWithManager(mgr); err != nil { panic(fmt.Sprintf(\"unable to create reconciler: %s\", err)) }", "operator-sdk create api --group=cache --version v1 --kind MemcachedBackup --resource --controller --plugins=go/v4", "Create Resource [y/n] y Create Controller [y/n] y", "// MemcachedBackupSpec defines the desired state of MemcachedBackup type MemcachedBackupSpec struct { // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster // Important: Run \"make\" to regenerate code after modifying this file //+kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedBackupStatus defines the observed state of MemcachedBackup type MemcachedBackupStatus struct { // INSERT ADDITIONAL STATUS FIELD - define observed state of cluster // Important: Run \"make\" to regenerate code after modifying this file // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }", "make generate", "make manifests", "for _, w := range ws { // Register controller with the factory reconcilePeriod := defaultReconcilePeriod if w.ReconcilePeriod != nil { reconcilePeriod = w.ReconcilePeriod.Duration } maxConcurrentReconciles := defaultMaxConcurrentReconciles if w.MaxConcurrentReconciles != nil { maxConcurrentReconciles = *w.MaxConcurrentReconciles } r, err := reconciler.New( reconciler.WithChart(*w.Chart), reconciler.WithGroupVersionKind(w.GroupVersionKind), reconciler.WithOverrideValues(w.OverrideValues), reconciler.SkipDependentWatches(w.WatchDependentResources != nil && !*w.WatchDependentResources), reconciler.WithMaxConcurrentReconciles(maxConcurrentReconciles), reconciler.WithReconcilePeriod(reconcilePeriod), reconciler.WithInstallAnnotations(annotation.DefaultInstallAnnotations...), reconciler.WithUpgradeAnnotations(annotation.DefaultUpgradeAnnotations...), reconciler.WithUninstallAnnotations(annotation.DefaultUninstallAnnotations...), )", "// Setup manager with Go API if err = (&controllers.MemcachedBackupReconciler{ Client: mgr.GetClient(), Scheme: mgr.GetScheme(), }).SetupWithManager(mgr); err != nil { setupLog.Error(err, \"unable to create controller\", \"controller\", \"MemcachedBackup\") os.Exit(1) } // Setup manager with Helm API for _, w := range ws { if err := r.SetupWithManager(mgr); err != nil { setupLog.Error(err, \"unable to create controller\", \"controller\", \"Helm\") os.Exit(1) } setupLog.Info(\"configured watch\", \"gvk\", w.GroupVersionKind, \"chartPath\", w.ChartPath, \"maxConcurrentReconciles\", maxConcurrentReconciles, \"reconcilePeriod\", reconcilePeriod) } // Start the manager if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil { setupLog.Error(err, \"problem running manager\") os.Exit(1) }", "--- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: manager-role rules: - apiGroups: - \"\" resources: - namespaces verbs: - get - apiGroups: - apps resources: - deployments - daemonsets - replicasets - statefulsets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups/finalizers verbs: - create - delete - get - list - patch - update - watch - apiGroups: - \"\" resources: - pods - services - services/finalizers - endpoints - persistentvolumeclaims - events - configmaps - secrets - 
serviceaccounts verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups/status verbs: - get - patch - update - apiGroups: - policy resources: - events - poddisruptionbudgets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcacheds - memcacheds/status - memcacheds/finalizers verbs: - create - delete - get - list - patch - update - watch", "make install run", "make docker-build IMG=<registry>/<user>/<image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc get deployment -n <project_name>-system", "NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m", "oc project <project_name>-system", "apiVersion: cache.my.domain/v1 kind: Memcached metadata: name: memcached-sample spec: # Default values copied from <project_dir>/helm-charts/memcached/values.yaml affinity: {} autoscaling: enabled: false maxReplicas: 100 minReplicas: 1 targetCPUUtilizationPercentage: 80 fullnameOverride: \"\" image: pullPolicy: IfNotPresent repository: nginx tag: \"\" imagePullSecrets: [] ingress: annotations: {} className: \"\" enabled: false hosts: - host: chart-example.local paths: - path: / pathType: ImplementationSpecific tls: [] nameOverride: \"\" nodeSelector: {} podAnnotations: {} podSecurityContext: {} replicaCount: 3 resources: {} securityContext: {} service: port: 80 type: ClusterIP serviceAccount: annotations: {} create: true name: \"\" tolerations: []", "oc apply -f config/samples/cache_v1_memcached.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 18m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 18m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 18m", "apiVersion: cache.my.domain/v1 kind: MemcachedBackup metadata: name: memcachedbackup-sample spec: size: 2", "oc apply -f config/samples/cache_v1_memcachedbackup.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE memcachedbackup-sample-8649699989-4bbzg 1/1 Running 0 22m memcachedbackup-sample-8649699989-mq6mx 1/1 Running 0 22m", "oc delete -f config/samples/cache_v1_memcached.yaml", "oc delete -f config/samples/cache_v1_memcachedbackup.yaml", "make undeploy", "Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.36.1-ocp", "containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.16", "docker-buildx: ## Build and push the Docker image for the manager for multi-platform support - docker buildx create --name project-v3-builder docker buildx use project-v3-builder - docker buildx build --push --platform=USD(PLATFORMS) --tag USD{IMG} -f Dockerfile . 
- docker buildx rm project-v3-builder", "k8s.io/api v0.29.2 k8s.io/apimachinery v0.29.2 k8s.io/client-go v0.29.2 sigs.k8s.io/controller-runtime v0.17.3", "go mod tidy", "mkdir memcached-operator", "cd memcached-operator", "operator-sdk init --plugins=quarkus --domain=example.com --project-name=memcached-operator", "operator-sdk create api --plugins quarkus --group cache --version v1 --kind Memcached", "make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make install", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system", "oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system", "oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system", "make undeploy", "mkdir -p USDHOME/projects/memcached-operator", "cd USDHOME/projects/memcached-operator", "operator-sdk init --plugins=quarkus --domain=example.com --project-name=memcached-operator", "domain: example.com layout: - quarkus.javaoperatorsdk.io/v1-alpha projectName: memcached-operator version: \"3\"", "operator-sdk create api --plugins=quarkus \\ 1 --group=cache \\ 2 --version=v1 \\ 3 --kind=Memcached 4", "tree", ". ├── Makefile ├── PROJECT ├── pom.xml └── src └── main ├── java │ └── com │ └── example │ ├── Memcached.java │ ├── MemcachedReconciler.java │ ├── MemcachedSpec.java │ └── MemcachedStatus.java └── resources └── application.properties 6 directories, 8 files", "public class MemcachedSpec { private Integer size; public Integer getSize() { return size; } public void setSize(Integer size) { this.size = size; } }", "import java.util.ArrayList; import java.util.List; public class MemcachedStatus { // Add Status information here // Nodes are the names of the memcached pods private List<String> nodes; public List<String> getNodes() { if (nodes == null) { nodes = new ArrayList<>(); } return nodes; } public void setNodes(List<String> nodes) { this.nodes = nodes; } }", "@Version(\"v1\") @Group(\"cache.example.com\") public class Memcached extends CustomResource<MemcachedSpec, MemcachedStatus> implements Namespaced {}", "mvn clean install", "cat target/kubernetes/memcacheds.cache.example.com-v1.yaml", "Generated by Fabric8 CRDGenerator, manual edits might get overwritten! 
apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: memcacheds.cache.example.com spec: group: cache.example.com names: kind: Memcached plural: memcacheds singular: memcached scope: Namespaced versions: - name: v1 schema: openAPIV3Schema: properties: spec: properties: size: type: integer type: object status: properties: nodes: items: type: string type: array type: object type: object served: true storage: true subresources: status: {}", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: # Add spec fields here size: 1", "<dependency> <groupId>commons-collections</groupId> <artifactId>commons-collections</artifactId> <version>3.2.2</version> </dependency>", "package com.example; import io.fabric8.kubernetes.client.KubernetesClient; import io.javaoperatorsdk.operator.api.reconciler.Context; import io.javaoperatorsdk.operator.api.reconciler.Reconciler; import io.javaoperatorsdk.operator.api.reconciler.UpdateControl; import io.fabric8.kubernetes.api.model.ContainerBuilder; import io.fabric8.kubernetes.api.model.ContainerPortBuilder; import io.fabric8.kubernetes.api.model.LabelSelectorBuilder; import io.fabric8.kubernetes.api.model.ObjectMetaBuilder; import io.fabric8.kubernetes.api.model.OwnerReferenceBuilder; import io.fabric8.kubernetes.api.model.Pod; import io.fabric8.kubernetes.api.model.PodSpecBuilder; import io.fabric8.kubernetes.api.model.PodTemplateSpecBuilder; import io.fabric8.kubernetes.api.model.apps.Deployment; import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder; import io.fabric8.kubernetes.api.model.apps.DeploymentSpecBuilder; import org.apache.commons.collections.CollectionUtils; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.stream.Collectors; public class MemcachedReconciler implements Reconciler<Memcached> { private final KubernetesClient client; public MemcachedReconciler(KubernetesClient client) { this.client = client; } // TODO Fill in the rest of the reconciler @Override public UpdateControl<Memcached> reconcile( Memcached resource, Context context) { // TODO: fill in logic Deployment deployment = client.apps() .deployments() .inNamespace(resource.getMetadata().getNamespace()) .withName(resource.getMetadata().getName()) .get(); if (deployment == null) { Deployment newDeployment = createMemcachedDeployment(resource); client.apps().deployments().create(newDeployment); return UpdateControl.noUpdate(); } int currentReplicas = deployment.getSpec().getReplicas(); int requiredReplicas = resource.getSpec().getSize(); if (currentReplicas != requiredReplicas) { deployment.getSpec().setReplicas(requiredReplicas); client.apps().deployments().createOrReplace(deployment); return UpdateControl.noUpdate(); } List<Pod> pods = client.pods() .inNamespace(resource.getMetadata().getNamespace()) .withLabels(labelsForMemcached(resource)) .list() .getItems(); List<String> podNames = pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList()); if (resource.getStatus() == null || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) { if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus()); resource.getStatus().setNodes(podNames); return UpdateControl.updateResource(resource); } return UpdateControl.noUpdate(); } private Map<String, String> labelsForMemcached(Memcached m) { Map<String, String> labels = new HashMap<>(); labels.put(\"app\", \"memcached\"); labels.put(\"memcached_cr\", m.getMetadata().getName()); return 
labels; } private Deployment createMemcachedDeployment(Memcached m) { Deployment deployment = new DeploymentBuilder() .withMetadata( new ObjectMetaBuilder() .withName(m.getMetadata().getName()) .withNamespace(m.getMetadata().getNamespace()) .build()) .withSpec( new DeploymentSpecBuilder() .withReplicas(m.getSpec().getSize()) .withSelector( new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build()) .withTemplate( new PodTemplateSpecBuilder() .withMetadata( new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build()) .withSpec( new PodSpecBuilder() .withContainers( new ContainerBuilder() .withImage(\"memcached:1.4.36-alpine\") .withName(\"memcached\") .withCommand(\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\") .withPorts( new ContainerPortBuilder() .withContainerPort(11211) .withName(\"memcached\") .build()) .build()) .build()) .build()) .build()) .build(); deployment.addOwnerReference(m); return deployment; } }", "Deployment deployment = client.apps() .deployments() .inNamespace(resource.getMetadata().getNamespace()) .withName(resource.getMetadata().getName()) .get();", "if (deployment == null) { Deployment newDeployment = createMemcachedDeployment(resource); client.apps().deployments().create(newDeployment); return UpdateControl.noUpdate(); }", "int currentReplicas = deployment.getSpec().getReplicas(); int requiredReplicas = resource.getSpec().getSize();", "if (currentReplicas != requiredReplicas) { deployment.getSpec().setReplicas(requiredReplicas); client.apps().deployments().createOrReplace(deployment); return UpdateControl.noUpdate(); }", "List<Pod> pods = client.pods() .inNamespace(resource.getMetadata().getNamespace()) .withLabels(labelsForMemcached(resource)) .list() .getItems(); List<String> podNames = pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList());", "if (resource.getStatus() == null || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) { if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus()); resource.getStatus().setNodes(podNames); return UpdateControl.updateResource(resource); }", "private Map<String, String> labelsForMemcached(Memcached m) { Map<String, String> labels = new HashMap<>(); labels.put(\"app\", \"memcached\"); labels.put(\"memcached_cr\", m.getMetadata().getName()); return labels; }", "private Deployment createMemcachedDeployment(Memcached m) { Deployment deployment = new DeploymentBuilder() .withMetadata( new ObjectMetaBuilder() .withName(m.getMetadata().getName()) .withNamespace(m.getMetadata().getNamespace()) .build()) .withSpec( new DeploymentSpecBuilder() .withReplicas(m.getSpec().getSize()) .withSelector( new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build()) .withTemplate( new PodTemplateSpecBuilder() .withMetadata( new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build()) .withSpec( new PodSpecBuilder() .withContainers( new ContainerBuilder() .withImage(\"memcached:1.4.36-alpine\") .withName(\"memcached\") .withCommand(\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\") .withPorts( new ContainerPortBuilder() .withContainerPort(11211) .withName(\"memcached\") .build()) .build()) .build()) .build()) .build()) .build(); deployment.addOwnerReference(m); return deployment; }", "mvn clean install", "[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 11.193 s [INFO] Finished at: 
2021-05-26T12:16:54-04:00 [INFO] ------------------------------------------------------------------------", "oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml", "customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: memcached-operator-admin subjects: - kind: ServiceAccount name: memcached-quarkus-operator-operator namespace: <operator_namespace> roleRef: kind: ClusterRole name: cluster-admin apiGroup: \"\"", "oc apply -f rbac.yaml", "java -jar target/quarkus-app/quarkus-run.jar", "kubectl apply -f memcached-sample.yaml", "memcached.cache.example.com/memcached-sample created", "oc get all", "NAME READY STATUS RESTARTS AGE pod/memcached-sample-6c765df685-mfqnz 1/1 Running 0 18s", "make docker-build IMG=<registry>/<user>/<image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<image_name>:<tag>", "oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml", "customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: memcached-operator-admin subjects: - kind: ServiceAccount name: memcached-quarkus-operator-operator namespace: <operator_namespace> roleRef: kind: ClusterRole name: cluster-admin apiGroup: \"\"", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc apply -f rbac.yaml", "oc get all -n default", "NAME READY UP-TO-DATE AVAILABLE AGE pod/memcached-quarkus-operator-operator-7db86ccf58-k4mlm 0/1 Running 0 18s", "oc apply -f memcached-sample.yaml", "memcached.cache.example.com/memcached-sample created", "oc get all", "NAME READY STATUS RESTARTS AGE pod/memcached-quarkus-operator-operator-7b766f4896-kxnzt 1/1 Running 1 79s pod/memcached-sample-6c765df685-mfqnz 1/1 Running 0 18s", "make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>", "docker push <registry>/<user>/<bundle_image_name>:<tag>", "operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3", "Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.36.1-ocp", "containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.16", "docker-buildx: ## Build and push the Docker image for the manager for multi-platform support - docker buildx create --name project-v3-builder docker buildx use project-v3-builder - docker buildx build --push --platform=USD(PLATFORMS) --tag USD{IMG} -f Dockerfile . 
- docker buildx rm project-v3-builder", "k8s.io/api v0.29.2 k8s.io/apimachinery v0.29.2 k8s.io/client-go v0.29.2 sigs.k8s.io/controller-runtime v0.17.3", "go mod tidy", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: features.operators.openshift.io/disconnected: \"true\" features.operators.openshift.io/fips-compliant: \"false\" features.operators.openshift.io/proxy-aware: \"false\" features.operators.openshift.io/tls-profiles: \"false\" features.operators.openshift.io/token-auth-aws: \"false\" features.operators.openshift.io/token-auth-azure: \"false\" features.operators.openshift.io/token-auth-gcp: \"false\"", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\", \"proxy-aware\"]'", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/valid-subscription: '[\"OpenShift Container Platform\"]'", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/valid-subscription: '[\"3Scale Commercial License\", \"Red Hat Managed Integration\"]'", "spec: spec: containers: - command: - /manager env: - name: <related_image_environment_variable> 1 value: \"<related_image_reference_with_tag>\" 2", "// deploymentForMemcached returns a memcached Deployment object Spec: corev1.PodSpec{ Containers: []corev1.Container{{ - Image: \"memcached:1.4.36-alpine\", 1 + Image: os.Getenv(\"<related_image_environment_variable>\"), 2 Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{", "spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v - image: \"docker.io/memcached:1.4.36-alpine\" 1 + image: \"{{ lookup('env', '<related_image_environment_variable>') }}\" 2 ports: - containerPort: 11211", "- group: demo.example.com version: v1alpha1 kind: Memcached chart: helm-charts/memcached overrideValues: 1 relatedImage: USD{<related_image_environment_variable>} 2", "relatedImage: \"\"", "containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.pullPolicy }} env: 1 - name: related_image 2 value: \"{{ .Values.relatedImage }}\" 3", "BUNDLE_GEN_FLAGS ?= -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) # USE_IMAGE_DIGESTS defines if images are resolved via tags or digests # You can enable this value if you would like to use SHA Based Digests # To enable set flag to true USE_IMAGE_DIGESTS ?= false ifeq (USD(USE_IMAGE_DIGESTS), true) BUNDLE_GEN_FLAGS += --use-image-digests endif - USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) 1 + USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle USD(BUNDLE_GEN_FLAGS) 2", "make bundle USE_IMAGE_DIGESTS=true", "metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\"]'", "labels: operatorframework.io/arch.<arch>: supported 1 operatorframework.io/os.<os>: supported 2", "labels: operatorframework.io/os.linux: supported", "labels: operatorframework.io/arch.amd64: supported", "labels: operatorframework.io/arch.s390x: supported operatorframework.io/os.zos: supported operatorframework.io/os.linux: supported 1 operatorframework.io/arch.amd64: supported 2", "metadata: annotations: 
operatorframework.io/suggested-namespace: <namespace> 1", "metadata: annotations: operatorframework.io/suggested-namespace-template: 1 { \"apiVersion\": \"v1\", \"kind\": \"Namespace\", \"metadata\": { \"name\": \"vertical-pod-autoscaler-suggested-template\", \"annotations\": { \"openshift.io/node-selector\": \"\" } } }", "module github.com/example-inc/memcached-operator go 1.19 require ( k8s.io/apimachinery v0.26.0 k8s.io/client-go v0.26.0 sigs.k8s.io/controller-runtime v0.14.1 operator-framework/operator-lib v0.11.0 )", "import ( apiv1 \"github.com/operator-framework/api/pkg/operators/v1\" ) func NewUpgradeable(cl client.Client) (Condition, error) { return NewCondition(cl, \"apiv1.OperatorUpgradeable\") } cond, err := NewUpgradeable(cl);", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: webhook-operator.v0.0.1 spec: customresourcedefinitions: owned: - kind: WebhookTest name: webhooktests.webhook.operators.coreos.io 1 version: v1 install: spec: deployments: - name: webhook-operator-webhook strategy: deployment installModes: - supported: false type: OwnNamespace - supported: false type: SingleNamespace - supported: false type: MultiNamespace - supported: true type: AllNamespaces webhookdefinitions: - type: ValidatingAdmissionWebhook 2 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: vwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest - type: MutatingAdmissionWebhook 3 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: mwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest - type: ConversionWebhook 4 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook generateName: cwebhooktest.kb.io sideEffects: None webhookPath: /convert conversionCRDs: - webhooktests.webhook.operators.coreos.io 5", "- displayName: MongoDB Standalone group: mongodb.com kind: MongoDbStandalone name: mongodbstandalones.mongodb.com resources: - kind: Service name: '' version: v1 - kind: StatefulSet name: '' version: v1beta2 - kind: Pod name: '' version: v1 - kind: ConfigMap name: '' version: v1 specDescriptors: - description: Credentials for Ops Manager or Cloud Manager. displayName: Credentials path: credentials x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret' - description: Project this deployment belongs to. displayName: Project path: project x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap' - description: MongoDB version to be installed. displayName: Version path: version x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:label' statusDescriptors: - description: The status of each of the pods for the MongoDB cluster. displayName: Pod Status path: pods x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:podStatuses' version: v1 description: >- MongoDB Deployment consisting of only one host. 
No replication of data.", "required: - name: etcdclusters.etcd.database.coreos.com version: v1beta2 kind: EtcdCluster displayName: etcd Cluster description: Represents a cluster of etcd nodes.", "versions: - name: v1alpha1 served: true storage: false - name: v1beta1 1 served: true storage: true", "customresourcedefinitions: owned: - name: cluster.example.com version: v1beta1 1 kind: cluster displayName: Cluster", "versions: - name: v1alpha1 served: false 1 storage: true", "versions: - name: v1alpha1 served: false storage: false 1 - name: v1beta1 served: true storage: true 2", "versions: - name: v1beta1 served: true storage: true", "metadata: annotations: alm-examples: >- [{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdCluster\",\"metadata\":{\"name\":\"example\",\"namespace\":\"<operator_namespace>\"},\"spec\":{\"size\":3,\"version\":\"3.2.13\"}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdRestore\",\"metadata\":{\"name\":\"example-etcd-cluster\"},\"spec\":{\"etcdCluster\":{\"name\":\"example-etcd-cluster\"},\"backupStorageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdBackup\",\"metadata\":{\"name\":\"example-etcd-cluster-backup\"},\"spec\":{\"etcdEndpoints\":[\"<etcd-cluster-endpoints>\"],\"storageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}}]", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operators.operatorframework.io/internal-objects: '[\"my.internal.crd1.io\",\"my.internal.crd2.io\"]' 1", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operatorframework.io/initialization-resource: |- { \"apiVersion\": \"ocs.openshift.io/v1\", \"kind\": \"StorageCluster\", \"metadata\": { \"name\": \"example-storagecluster\" }, \"spec\": { \"manageNodes\": false, \"monPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"10Gi\" } }, \"storageClassName\": \"gp2\" } }, \"storageDeviceSets\": [ { \"count\": 3, \"dataPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"1Ti\" } }, \"storageClassName\": \"gp2\", \"volumeMode\": \"Block\" } }, \"name\": \"example-deviceset\", \"placement\": {}, \"portable\": true, \"resources\": {} } ] } }", "make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>", "docker push <registry>/<user>/<bundle_image_name>:<tag>", "operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3", "make catalog-build CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>", "make catalog-push CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>", "make bundle-build bundle-push catalog-build catalog-push BUNDLE_IMG=<bundle_image_pull_spec> CATALOG_IMG=<index_image_pull_spec>", "IMAGE_TAG_BASE=quay.io/example/my-operator", "make bundle-build bundle-push catalog-build catalog-push", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: cs-memcached namespace: <operator_namespace> spec: displayName: My Test publisher: Company sourceType: grpc grpcPodConfig: 
securityContextConfig: <security_mode> 1 image: quay.io/example/memcached-catalog:v0.0.1 2 updateStrategy: registryPoll: interval: 10m", "oc get catalogsource", "NAME DISPLAY TYPE PUBLISHER AGE cs-memcached My Test grpc Company 4h31m", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-test namespace: <operator_namespace> spec: targetNamespaces: - <operator_namespace>", "\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: catalogtest namespace: <catalog_namespace> spec: channel: \"alpha\" installPlanApproval: Manual name: catalog source: cs-memcached sourceNamespace: <operator_namespace> startingCSV: memcached-operator.v0.0.1", "oc get og", "NAME AGE my-test 4h40m", "oc get csv", "NAME DISPLAY VERSION REPLACES PHASE memcached-operator.v0.0.1 Test 0.0.1 Succeeded", "oc get pods", "NAME READY STATUS RESTARTS AGE 9098d908802769fbde8bd45255e69710a9f8420a8f3d814abe88b68f8ervdj6 0/1 Completed 0 4h33m catalog-controller-manager-7fd5b7b987-69s4n 2/2 Running 0 4h32m cs-memcached-7622r 1/1 Running 0 4h33m", "operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1", "INFO[0006] Creating a File-Based Catalog of the bundle \"quay.io/demo/memcached-operator:v0.0.1\" INFO[0008] Generated a valid File-Based Catalog INFO[0012] Created registry pod: quay-io-demo-memcached-operator-v1-0-1 INFO[0012] Created CatalogSource: memcached-operator-catalog INFO[0012] OperatorGroup \"operator-sdk-og\" created INFO[0012] Created Subscription: memcached-operator-v0-0-1-sub INFO[0015] Approved InstallPlan install-h9666 for the Subscription: memcached-operator-v0-0-1-sub INFO[0015] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" to reach 'Succeeded' phase INFO[0015] Waiting for ClusterServiceVersion \"\"my-project/memcached-operator.v0.0.1\" to appear INFO[0026] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Pending INFO[0028] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Installing INFO[0059] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Succeeded INFO[0059] OLM has successfully installed \"memcached-operator.v0.0.1\"", "operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2", "INFO[0002] Found existing subscription with name memcached-operator-v0-0-1-sub and namespace my-project INFO[0002] Found existing catalog source with name memcached-operator-catalog and namespace my-project INFO[0008] Generated a valid Upgraded File-Based Catalog INFO[0009] Created registry pod: quay-io-demo-memcached-operator-v0-0-2 INFO[0009] Updated catalog source memcached-operator-catalog with address and annotations INFO[0010] Deleted previous registry pod with name \"quay-io-demo-memcached-operator-v0-0-1\" INFO[0041] Approved InstallPlan install-gvcjh for the Subscription: memcached-operator-v0-0-1-sub INFO[0042] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" to reach 'Succeeded' phase INFO[0019] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Pending INFO[0042] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: InstallReady INFO[0043] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Installing INFO[0044] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Succeeded INFO[0044] Successfully upgraded to \"memcached-operator.v0.0.2\"", "operator-sdk cleanup memcached-operator", "apiVersion: 
operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: \"olm.properties\": '[{\"type\": \"olm.maxOpenShiftVersion\", \"value\": \"<cluster_version>\"}]' 1", "com.redhat.openshift.versions: \"v4.7-v4.9\" 1", "LABEL com.redhat.openshift.versions=\"<versions>\" 1", "spec: securityContext: seccompProfile: type: RuntimeDefault 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL", "spec: securityContext: 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL", "containers: - name: my-container securityContext: allowPrivilegeEscalation: false capabilities: add: - \"NET_ADMIN\"", "install: spec: clusterPermissions: - rules: - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use serviceAccountName: default", "spec: apiservicedefinitions:{} description: The <operator_name> requires a privileged pod security admission label set on the Operator's namespace. The Operator's agents require escalated permissions to restart the node if the node needs remediation.", "install: spec: clusterPermissions: - rules: - apiGroups: - \"cloudcredential.openshift.io\" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch", "metadata: annotations: features.operators.openshift.io/token-auth-aws: \"true\"", "// Get ENV var roleARN := os.Getenv(\"ROLEARN\") setupLog.Info(\"getting role ARN\", \"role ARN = \", roleARN) webIdentityTokenPath := \"/var/run/secrets/openshift/serviceaccount/token\"", "import ( minterv1 \"github.com/openshift/cloud-credential-operator/pkg/apis/cloudcredential/v1\" corev1 \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" ) var in = minterv1.AWSProviderSpec{ StatementEntries: []minterv1.StatementEntry{ { Action: []string{ \"s3:*\", }, Effect: \"Allow\", Resource: \"arn:aws:s3:*:*:*\", }, }, STSIAMRoleARN: \"<role_arn>\", } var codec = minterv1.Codec var ProviderSpec, _ = codec.EncodeProviderSpec(in.DeepCopyObject()) const ( name = \"<credential_request_name>\" namespace = \"<namespace_name>\" ) var CredentialsRequestTemplate = &minterv1.CredentialsRequest{ ObjectMeta: metav1.ObjectMeta{ Name: name, Namespace: \"openshift-cloud-credential-operator\", }, Spec: minterv1.CredentialsRequestSpec{ ProviderSpec: ProviderSpec, SecretRef: corev1.ObjectReference{ Name: \"<secret_name>\", Namespace: namespace, }, ServiceAccountNames: []string{ \"<service_account_name>\", }, CloudTokenPath: \"\", }, }", "// CredentialsRequest is a struct that represents a request for credentials type CredentialsRequest struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` Metadata struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"metadata\"` Spec struct { SecretRef struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"secretRef\"` ProviderSpec struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` StatementEntries []struct { Effect string `yaml:\"effect\"` Action []string `yaml:\"action\"` Resource string `yaml:\"resource\"` } `yaml:\"statementEntries\"` STSIAMRoleARN string `yaml:\"stsIAMRoleARN\"` } `yaml:\"providerSpec\"` // added new field CloudTokenPath string `yaml:\"cloudTokenPath\"` } `yaml:\"spec\"` } // ConsumeCredsRequestAddingTokenInfo is a function that takes a YAML filename 
and two strings as arguments // It unmarshals the YAML file to a CredentialsRequest object and adds the token information. func ConsumeCredsRequestAddingTokenInfo(fileName, tokenString, tokenPath string) (*CredentialsRequest, error) { // open a file containing YAML form of a CredentialsRequest file, err := os.Open(fileName) if err != nil { return nil, err } defer file.Close() // create a new CredentialsRequest object cr := &CredentialsRequest{} // decode the yaml file to the object decoder := yaml.NewDecoder(file) err = decoder.Decode(cr) if err != nil { return nil, err } // assign the string to the existing field in the object cr.Spec.CloudTokenPath = tokenPath // return the modified object return cr, nil }", "// apply credentialsRequest on install credReq := credreq.CredentialsRequestTemplate credReq.Spec.CloudTokenPath = webIdentityTokenPath c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, \"unable to create CredRequest\") os.Exit(1) } }", "// WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 \"k8s.io/api/core/v1\" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf(\"timed out waiting for secret %s in namespace %s\", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work on STS case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } }", "func SharedCredentialsFileFromSecret(secret *corev1.Secret) (string, error) { var data []byte switch { case len(secret.Data[\"credentials\"]) > 0: data = secret.Data[\"credentials\"] default: return \"\", errors.New(\"invalid secret for aws credentials\") } f, err := ioutil.TempFile(\"\", \"aws-shared-credentials\") if err != nil { return \"\", errors.Wrap(err, \"failed to create file for shared credentials\") } defer f.Close() if _, err := f.Write(data); err != nil { return \"\", errors.Wrapf(err, \"failed to write credentials to %s\", f.Name()) } return f.Name(), nil }", "sharedCredentialsFile, err := SharedCredentialsFileFromSecret(secret) if err != nil { // handle error } options := session.Options{ SharedConfigState: session.SharedConfigEnable, SharedConfigFiles: []string{sharedCredentialsFile}, }", "#!/bin/bash set -x AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query \"Account\" --output text) OIDC_PROVIDER=USD(oc get authentication cluster -ojson | jq -r .spec.serviceAccountIssuer | sed -e \"s/^https:\\/\\///\") NAMESPACE=my-namespace SERVICE_ACCOUNT_NAME=\"my-service-account\" POLICY_ARN_STRINGS=\"arn:aws:iam::aws:policy/AmazonS3FullAccess\" read -r -d '' TRUST_RELATIONSHIP <<EOF { \"Version\": \"2012-10-17\", 
\"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_PROVIDER}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME}\" } } } ] } EOF echo \"USD{TRUST_RELATIONSHIP}\" > trust.json aws iam create-role --role-name \"USDSERVICE_ACCOUNT_NAME\" --assume-role-policy-document file://trust.json --description \"role for demo\" while IFS= read -r POLICY_ARN; do echo -n \"Attaching USDPOLICY_ARN ... \" aws iam attach-role-policy --role-name \"USDSERVICE_ACCOUNT_NAME\" --policy-arn \"USD{POLICY_ARN}\" echo \"ok.\" done <<< \"USDPOLICY_ARN_STRINGS\"", "oc exec operator-pod -n <namespace_name> -- cat /var/run/secrets/openshift/serviceaccount/token", "oc exec operator-pod -n <namespace_name> -- cat /<path>/<to>/<secret_name> 1", "aws sts assume-role-with-web-identity --role-arn USDROLEARN --role-session-name <session_name> --web-identity-token USDTOKEN", "install: spec: clusterPermissions: - rules: - apiGroups: - \"cloudcredential.openshift.io\" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch", "metadata: annotations: features.operators.openshift.io/token-auth-azure: \"true\"", "// Get ENV var clientID := os.Getenv(\"CLIENTID\") tenantID := os.Getenv(\"TENANTID\") subscriptionID := os.Getenv(\"SUBSCRIPTIONID\") azureFederatedTokenFile := \"/var/run/secrets/openshift/serviceaccount/token\"", "// apply credentialsRequest on install credReqTemplate.Spec.AzureProviderSpec.AzureClientID = clientID credReqTemplate.Spec.AzureProviderSpec.AzureTenantID = tenantID credReqTemplate.Spec.AzureProviderSpec.AzureRegion = \"centralus\" credReqTemplate.Spec.AzureProviderSpec.AzureSubscriptionID = subscriptionID credReqTemplate.CloudTokenPath = azureFederatedTokenFile c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, \"unable to create CredRequest\") os.Exit(1) } }", "// WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 \"k8s.io/api/core/v1\" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf(\"timed out waiting for secret %s in namespace %s\", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work on STS case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } }", "operator-sdk scorecard <bundle_dir_or_image> [flags]", "operator-sdk scorecard -h", "./bundle └── tests └── scorecard └── config.yaml", "kind: Configuration apiversion: 
scorecard.operatorframework.io/v1alpha3 metadata: name: config stages: - parallel: true tests: - image: quay.io/operator-framework/scorecard-test:v1.36.1 entrypoint: - scorecard-test - basic-check-spec labels: suite: basic test: basic-check-spec-test - image: quay.io/operator-framework/scorecard-test:v1.36.1 entrypoint: - scorecard-test - olm-bundle-validation labels: suite: olm test: olm-bundle-validation-test", "make bundle", "operator-sdk scorecard <bundle_dir_or_image>", "{ \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"kind\": \"TestList\", \"items\": [ { \"kind\": \"Test\", \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"spec\": { \"image\": \"quay.io/operator-framework/scorecard-test:v1.36.1\", \"entrypoint\": [ \"scorecard-test\", \"olm-bundle-validation\" ], \"labels\": { \"suite\": \"olm\", \"test\": \"olm-bundle-validation-test\" } }, \"status\": { \"results\": [ { \"name\": \"olm-bundle-validation\", \"log\": \"time=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found metadata directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Getting mediaType info from manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Found annotations file\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Could not find optional dependencies file\\\" name=bundle-test\\n\", \"state\": \"pass\" } ] } } ] }", "-------------------------------------------------------------------------------- Image: quay.io/operator-framework/scorecard-test:v1.36.1 Entrypoint: [scorecard-test olm-bundle-validation] Labels: \"suite\":\"olm\" \"test\":\"olm-bundle-validation-test\" Results: Name: olm-bundle-validation State: pass Log: time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found metadata directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Getting mediaType info from manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Found annotations file\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Could not find optional dependencies file\" name=bundle-test", "operator-sdk scorecard <bundle_dir_or_image> -o text --selector=test=basic-check-spec-test", "operator-sdk scorecard <bundle_dir_or_image> -o text --selector=suite=olm", "operator-sdk scorecard <bundle_dir_or_image> -o text --selector='test in (basic-check-spec-test,olm-bundle-validation-test)'", "apiVersion: scorecard.operatorframework.io/v1alpha3 kind: Configuration metadata: name: config stages: - parallel: true 1 tests: - entrypoint: - scorecard-test - basic-check-spec image: quay.io/operator-framework/scorecard-test:v1.36.1 labels: suite: basic test: basic-check-spec-test - entrypoint: - scorecard-test - olm-bundle-validation image: quay.io/operator-framework/scorecard-test:v1.36.1 labels: suite: olm test: olm-bundle-validation-test", "// Copyright 2020 The Operator-SDK Authors // // Licensed under the Apache License, Version 2.0 (the \"License\"); // you may not use this file except in compliance with the License. 
// You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an \"AS IS\" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package main import ( \"encoding/json\" \"fmt\" \"log\" \"os\" scapiv1alpha3 \"github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3\" apimanifests \"github.com/operator-framework/api/pkg/manifests\" ) // This is the custom scorecard test example binary // As with the Redhat scorecard test image, the bundle that is under // test is expected to be mounted so that tests can inspect the // bundle contents as part of their test implementations. // The actual test is to be run is named and that name is passed // as an argument to this binary. This argument mechanism allows // this binary to run various tests all from within a single // test image. const PodBundleRoot = \"/bundle\" func main() { entrypoint := os.Args[1:] if len(entrypoint) == 0 { log.Fatal(\"Test name argument is required\") } // Read the pod's untar'd bundle from a well-known path. cfg, err := apimanifests.GetBundleFromDir(PodBundleRoot) if err != nil { log.Fatal(err.Error()) } var result scapiv1alpha3.TestStatus // Names of the custom tests which would be passed in the // `operator-sdk` command. switch entrypoint[0] { case CustomTest1Name: result = CustomTest1(cfg) case CustomTest2Name: result = CustomTest2(cfg) default: result = printValidTests() } // Convert scapiv1alpha3.TestResult to json. prettyJSON, err := json.MarshalIndent(result, \"\", \" \") if err != nil { log.Fatal(\"Failed to generate json\", err) } fmt.Printf(\"%s\\n\", string(prettyJSON)) } // printValidTests will print out full list of test names to give a hint to the end user on what the valid tests are. func printValidTests() scapiv1alpha3.TestStatus { result := scapiv1alpha3.TestResult{} result.State = scapiv1alpha3.FailState result.Errors = make([]string, 0) result.Suggestions = make([]string, 0) str := fmt.Sprintf(\"Valid tests for this image include: %s %s\", CustomTest1Name, CustomTest2Name) result.Errors = append(result.Errors, str) return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{result}, } } const ( CustomTest1Name = \"customtest1\" CustomTest2Name = \"customtest2\" ) // Define any operator specific custom tests here. // CustomTest1 and CustomTest2 are example test functions. Relevant operator specific // test logic is to be implemented in similarly. 
func CustomTest1(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest1Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func CustomTest2(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest2Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func wrapResult(r scapiv1alpha3.TestResult) scapiv1alpha3.TestStatus { return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{r}, } }", "operator-sdk bundle validate <bundle_dir_or_image> <flags>", "./bundle ├── manifests │ ├── cache.my.domain_memcacheds.yaml │ └── memcached-operator.clusterserviceversion.yaml └── metadata └── annotations.yaml", "INFO[0000] All validation tests have completed successfully", "ERRO[0000] Error: Value cache.example.com/v1alpha1, Kind=Memcached: CRD \"cache.example.com/v1alpha1, Kind=Memcached\" is present in bundle \"\" but not defined in CSV", "WARN[0000] Warning: Value : (memcached-operator.v0.0.1) annotations not found INFO[0000] All validation tests have completed successfully", "operator-sdk bundle validate -h", "operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>", "operator-sdk bundle validate ./bundle", "operator-sdk bundle validate <bundle_registry>/<bundle_image_name>:<tag>", "operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>", "ERRO[0000] Error: Value apiextensions.k8s.io/v1, Kind=CustomResource: unsupported media type registry+v1 for bundle object WARN[0000] Warning: Value k8sevent.v0.0.1: owned CRD \"k8sevents.k8s.k8sevent.com\" has an empty description", "operator-sdk bundle validate ./bundle --select-optional name=multiarch", "INFO[0020] All validation tests have completed successfully", "ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.ppc64le) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.s390x) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.amd64) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.arm64) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1]", "WARN[0014] Warning: Value test-operator.v0.0.1: check if the CSV is missing the label (operatorframework.io/arch.<value>) for the Arch(s): [\"amd64\" \"arm64\" \"ppc64le\" \"s390x\"]. Be aware that your Operator manager image [\"quay.io/example-org/test-operator:v1alpha1\"] provides this support. 
Thus, it is very likely that you want to provide it and if you support more than amd64 architectures, you MUST,use the required labels for all which are supported.Otherwise, your solution cannot be listed on the cluster for these architectures", "// Simple query nn := types.NamespacedName{ Name: \"cluster\", } infraConfig := &configv1.Infrastructure{} err = crClient.Get(context.Background(), nn, infraConfig) if err != nil { return err } fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.InfrastructureTopology)", "operatorConfigInformer := configinformer.NewSharedInformerFactoryWithOptions(configClient, 2*time.Second) infrastructureLister = operatorConfigInformer.Config().V1().Infrastructures().Lister() infraConfig, err := configClient.ConfigV1().Infrastructures().Get(context.Background(), \"cluster\", metav1.GetOptions{}) if err != nil { return err } // fmt.Printf(\"%v\\n\", infraConfig) fmt.Printf(\"%v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"%v\\n\", infraConfig.Status.InfrastructureTopology)", "../prometheus", "package controllers import ( \"github.com/prometheus/client_golang/prometheus\" \"sigs.k8s.io/controller-runtime/pkg/metrics\" ) var ( widgets = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widgets_total\", Help: \"Number of widgets processed\", }, ) widgetFailures = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widget_failures_total\", Help: \"Number of failed widgets\", }, ) ) func init() { // Register custom metrics with the global prometheus registry metrics.Registry.MustRegister(widgets, widgetFailures) }", "func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Add metrics widgets.Inc() widgetFailures.Inc() return ctrl.Result{}, nil }", "make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-k8s-role namespace: memcached-operator-system rules: - apiGroups: - \"\" resources: - endpoints - pods - services - nodes - secrets verbs: - get - list - watch", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: prometheus-k8s-rolebinding namespace: memcached-operator-system roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: prometheus-k8s-role subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring", "oc apply -f config/prometheus/role.yaml", "oc apply -f config/prometheus/rolebinding.yaml", "oc label namespace <operator_namespace> openshift.io/cluster-monitoring=\"true\"", "operator-sdk init --plugins=ansible --domain=testmetrics.com", "operator-sdk create api --group metrics --version v1 --kind Testmetrics --generate-role", "--- tasks file for Memcached - name: start k8sstatus k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211 - osdk_metric: name: my_thing_counter description: This metric counts things counter: {} - osdk_metric: name: my_counter_metric description: Add 3.14 to the counter counter: 
increment: yes - osdk_metric: name: my_gauge_metric description: Create my gauge and set it to 2. gauge: set: 2 - osdk_metric: name: my_histogram_metric description: Observe my histogram histogram: observe: 2 - osdk_metric: name: my_summary_metric description: Observe my summary summary: observe: 2", "make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make install", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "apiVersion: metrics.testmetrics.com/v1 kind: Testmetrics metadata: name: testmetrics-sample spec: size: 1", "oc create -f config/samples/metrics_v1_testmetrics.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE ansiblemetrics-controller-manager-<id> 2/2 Running 0 149m testmetrics-sample-memcached-<id> 1/1 Running 0 147m", "oc get ep", "NAME ENDPOINTS AGE ansiblemetrics-controller-manager-metrics-service 10.129.2.70:8443 150m", "token=`oc create token prometheus-k8s -n openshift-monitoring`", "oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep my_counter", "HELP my_counter_metric Add 3.14 to the counter TYPE my_counter_metric counter my_counter_metric 2", "oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep gauge", "HELP my_gauge_metric Create my gauge and set it to 2.", "oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep Observe", "HELP my_histogram_metric Observe my histogram HELP my_summary_metric Observe my summary", "import ( \"github.com/operator-framework/operator-sdk/pkg/leader\" ) func main() { err = leader.Become(context.TODO(), \"memcached-operator-lock\") if err != nil { log.Error(err, \"Failed to retry for leader lock\") os.Exit(1) } }", "import ( \"sigs.k8s.io/controller-runtime/pkg/manager\" ) func main() { opts := manager.Options{ LeaderElection: true, LeaderElectionID: \"memcached-operator-lock\" } mgr, err := manager.New(cfg, opts) }", "docker manifest inspect <image_manifest> 1", "{ \"manifests\": [ { \"digest\": \"sha256:c0669ef34cdc14332c0f1ab0c2c01acb91d96014b172f1a76f3a39e63d1f0bda\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"amd64\", \"os\": \"linux\" }, \"size\": 528 }, { \"digest\": \"sha256:30e6d35703c578ee703230b9dc87ada2ba958c1928615ac8a674fcbbcbb0f281\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"arm64\", \"os\": \"linux\", \"variant\": \"v8\" }, \"size\": 528 }, ] }", "docker inspect <image>", "FROM golang:1.19 as builder ARG TARGETOS ARG TARGETARCH RUN CGO_ENABLED=0 GOOS=USD{TARGETOS:-linux} GOARCH=USD{TARGETARCH} go build -a -o manager main.go 1", "PLATFORMS ?= linux/arm64,linux/amd64 1 .PHONY: docker-buildx", "make docker-buildx IMG=<image_registry>/<organization_name>/<repository_name>:<version_or_sha>", "apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name>", "apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name> affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: 2 - matchExpressions: 3 - key: kubernetes.io/arch 4 operator: In values: - amd64 - arm64 - ppc64le - s390x - key: kubernetes.io/os 5 operator: In values: - linux", "Template: 
corev1.PodTemplateSpec{ Spec: corev1.PodSpec{ Affinity: &corev1.Affinity{ NodeAffinity: &corev1.NodeAffinity{ RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ NodeSelectorTerms: []corev1.NodeSelectorTerm{ { MatchExpressions: []corev1.NodeSelectorRequirement{ { Key: \"kubernetes.io/arch\", Operator: \"In\", Values: []string{\"amd64\",\"arm64\",\"ppc64le\",\"s390x\"}, }, { Key: \"kubernetes.io/os\", Operator: \"In\", Values: []string{\"linux\"}, }, }, }, }, }, }, }, SecurityContext: &corev1.PodSecurityContext{ }, Containers: []corev1.Container{{ }}, },", "apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name>", "apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name> affinity: nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: 1 - preference: matchExpressions: 2 - key: kubernetes.io/arch 3 operator: In 4 values: - amd64 - arm64 weight: 90 5", "cfg = Config{ log: logf.Log.WithName(\"prune\"), DryRun: false, Clientset: client, LabelSelector: \"app=<operator_name>\", Resources: []schema.GroupVersionKind{ {Group: \"\", Version: \"\", Kind: PodKind}, }, Namespaces: []string{\"<operator_namespace>\"}, Strategy: StrategyConfig{ Mode: MaxCountStrategy, MaxCountSetting: 1, }, PreDeleteHook: myhook, }", "err := cfg.Execute(ctx)", "packagemanifests/ └── etcd ├── 0.0.1 │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml ├── 0.0.2 │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ └── etcdrestore.crd.yaml └── etcd.package.yaml", "bundle/ ├── bundle-0.0.1 │ ├── bundle.Dockerfile │ ├── manifests │ │ ├── etcdcluster.crd.yaml │ │ ├── etcdoperator.clusterserviceversion.yaml │ ├── metadata │ │ └── annotations.yaml │ └── tests │ └── scorecard │ └── config.yaml └── bundle-0.0.2 ├── bundle.Dockerfile ├── manifests │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ ├── etcdrestore.crd.yaml ├── metadata │ └── annotations.yaml └── tests └── scorecard └── config.yaml", "operator-sdk pkgman-to-bundle <package_manifests_dir> \\ 1 [--output-dir <directory>] \\ 2 --image-tag-base <image_name_base> 3", "operator-sdk run bundle <bundle_image_name>:<tag>", "INFO[0025] Successfully created registry pod: quay-io-my-etcd-0-9-4 INFO[0025] Created CatalogSource: etcd-catalog INFO[0026] OperatorGroup \"operator-sdk-og\" created INFO[0026] Created Subscription: etcdoperator-v0-9-4-sub INFO[0031] Approved InstallPlan install-5t58z for the Subscription: etcdoperator-v0-9-4-sub INFO[0031] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to reach 'Succeeded' phase INFO[0032] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to appear INFO[0048] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Pending INFO[0049] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Installing INFO[0064] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Succeeded INFO[0065] OLM has successfully installed \"etcdoperator.v0.9.4\"", "operator-sdk <command> [<subcommand>] [<argument>] [<flags>]", "operator-sdk completion bash", "bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh", "oc -n [namespace] edit cm hw-event-proxy-operator-manager-config", "apiVersion: controller-runtime.sigs.k8s.io/v1alpha1 kind: ControllerManagerConfig health: 
healthProbeBindAddress: :8081 metrics: bindAddress: 127.0.0.1:8080 webhook: port: 9443 leaderElection: leaderElect: true resourceName: 6e7a703c.redhat-cne.org", "oc get clusteroperator authentication -o yaml", "oc -n openshift-monitoring edit cm cluster-monitoring-config", "oc edit etcd cluster", "oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml", "oc get deployment -n openshift-ingress", "oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}'", "map[cidr:10.128.0.0/14 hostPrefix:23]", "oc edit kubeapiserver", "oc get clusteroperator openshift-controller-manager -o yaml", "oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: <operator_name> spec: packageName: <package_name> installNamespace: <namespace_name> channel: <channel_name> version: <version_number>", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> channel: latest 1", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> version: \"1.11.1\" 1", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> version: \">1.11.1\" 1", "oc apply -f <extension_name>.yaml", "apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.16 pullSecret: <pull_secret_name> pollInterval: <poll_interval_duration> 1", "apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: certified-operators spec: source: type: image image: ref: registry.redhat.io/redhat/certified-operator-index:v4.16 pullSecret: <pull_secret_name> pollInterval: 24h", "apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: community-operators spec: source: type: image image: ref: registry.redhat.io/redhat/community-operator-index:v4.16 pullSecret: <pull_secret_name> pollInterval: 24h", "oc apply -f <catalog_name>.yaml 1", "apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.16 pullSecret: <pull_secret_name> pollInterval: <poll_interval_duration> 1", "apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: certified-operators spec: source: type: image image: ref: registry.redhat.io/redhat/certified-operator-index:v4.16 pullSecret: <pull_secret_name> pollInterval: 24h", "apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: community-operators spec: source: type: image image: ref: registry.redhat.io/redhat/community-operator-index:v4.16 pullSecret: <pull_secret_name> pollInterval: 24h", "oc apply -f <catalog_name>.yaml 1", "oc create secret generic <pull_secret_name> --from-file=.dockercfg=<file_path>/.dockercfg --type=kubernetes.io/dockercfg --namespace=openshift-catalogd", "oc create secret generic redhat-cred --from-file=.dockercfg=/home/<username>/.dockercfg --type=kubernetes.io/dockercfg --namespace=openshift-catalogd", "oc create secret generic <pull_secret_name> 
--from-file=.dockerconfigjson=<file_path>/.docker/config.json --type=kubernetes.io/dockerconfigjson --namespace=openshift-catalogd", "oc create secret generic redhat-cred --from-file=.dockerconfigjson=/home/<username>/.docker/config.json --type=kubernetes.io/dockerconfigjson --namespace=openshift-catalogd", "oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<username> --docker-password=<password> --docker-email=<email> --namespace=openshift-catalogd", "oc create secret docker-registry redhat-cred --docker-server=registry.redhat.io --docker-username=username --docker-password=password [email protected] --namespace=openshift-catalogd", "apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.16 1 pullSecret: <pull_secret_name> 2 pollInterval: <poll_interval_duration> 3", "oc apply -f redhat-operators.yaml", "catalog.catalogd.operatorframework.io/redhat-operators created", "oc get catalog", "NAME AGE redhat-operators 20s", "oc describe catalog", "Name: redhat-operators Namespace: Labels: <none> Annotations: <none> API Version: catalogd.operatorframework.io/v1alpha1 Kind: Catalog Metadata: Creation Timestamp: 2024-06-10T17:34:53Z Finalizers: catalogd.operatorframework.io/delete-server-cache Generation: 1 Resource Version: 46075 UID: 83c0db3c-a553-41da-b279-9b3cddaa117d Spec: Source: Image: Pull Secret: redhat-cred Ref: registry.redhat.io/redhat/redhat-operator-index:v4.16 Type: image Status: 1 Conditions: Last Transition Time: 2024-06-10T17:35:15Z Message: Reason: UnpackSuccessful 2 Status: True Type: Unpacked Content URL: http://catalogd-catalogserver.openshift-catalogd.svc/catalogs/redhat-operators/all.json Observed Generation: 1 Phase: Unpacked 3 Resolved Source: Image: Last Poll Attempt: 2024-06-10T17:35:10Z Ref: registry.redhat.io/redhat/redhat-operator-index:v4.16 Resolved Ref: registry.redhat.io/redhat/redhat-operator-index@sha256:f2ccc079b5e490a50db532d1dc38fd659322594dcf3e653d650ead0e862029d9 4 Type: image Events: <none>", "oc -n openshift-catalogd port-forward svc/catalogd-catalogserver 8080:80", "curl -L http://localhost:8080/catalogs/<catalog_name>/all.json -C - -o /<path>/<catalog_name>.json", "curl -L http://localhost:8080/catalogs/redhat-operators/all.json -C - -o /home/username/catalogs/rhoc.json", "jq -s '.[] | select(.schema == \"olm.package\") | .name' /<path>/<filename>.json", "jq -s '.[] | select(.schema == \"olm.package\") | .name' /home/username/catalogs/rhoc.json", "NAME AGE \"3scale-operator\" \"advanced-cluster-management\" \"amq-broker-rhel8\" \"amq-online\" \"amq-streams\" \"amq7-interconnect-operator\" \"ansible-automation-platform-operator\" \"ansible-cloud-addons-operator\" \"apicast-operator\" \"aws-efs-csi-driver-operator\" \"aws-load-balancer-operator\" \"bamoe-businessautomation-operator\" \"bamoe-kogito-operator\" \"bare-metal-event-relay\" \"businessautomation-operator\"", "jq -c 'select(.schema == \"olm.bundle\") | {\"package\":.package, \"version\":.properties[] | select(.type == \"olm.bundle.object\").value.data | @base64d | fromjson | select(.kind == \"ClusterServiceVersion\" and (.spec.installModes[] | select(.type == \"AllNamespaces\" and .supported == true) != null) and .spec.webhookdefinitions == null).spec.version}' /<path>/<catalog_name>.json", "{\"package\":\"3scale-operator\",\"version\":\"0.10.0-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.10.5\"} 
{\"package\":\"3scale-operator\",\"version\":\"0.11.0-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.1-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.2-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.3-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.5-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.6-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.7-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.8-mas\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.0-opr-1\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.0-opr-2\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.0-opr-3\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.0-opr-4\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.1-opr-1\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.1-opr-2\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.2-opr-1\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.2-opr-2\"}", "jq -s '.[] | select( .schema == \"olm.package\") | select( .name == \"<package_name>\")' /<path>/<catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.package\") | select( .name == \"openshift-pipelines-operator-rh\")' /home/username/rhoc.json", "{ \"defaultChannel\": \"stable\", \"icon\": { \"base64data\": \"PHN2ZyB4bWxu...\" \"mediatype\": \"image/png\" }, \"name\": \"openshift-pipelines-operator-rh\", \"schema\": \"olm.package\" }", "jq -s '.[] | select( .schema == \"olm.package\") | .name' <catalog_name>.json", "jq -c 'select(.schema == \"olm.bundle\") | {\"package\":.package, \"version\":.properties[] | select(.type == \"olm.bundle.object\").value.data | @base64d | fromjson | select(.kind == \"ClusterServiceVersion\" and (.spec.installModes[] | select(.type == \"AllNamespaces\" and .supported == true) != null) and .spec.webhookdefinitions == null).spec.version}' <catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.package\") | select( .name == \"<package_name>\")' <catalog_name>.json", "jq -s '.[] | select( .package == \"<package_name>\")' <catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"<package_name>\") | .name' <catalog_name>.json", "jq -s '.[] | select( .package == \"<package_name>\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"<channel_name>\" ) | .entries | .[] | .name' <catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.channel\" ) | select ( .name == \"<channel>\") | select( .package == \"<package_name>\")' <catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.bundle\" ) | select( .package == \"<package_name>\") | .name' <catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.bundle\" ) | select ( .name == \"<bundle_name>\") | select( .package == \"<package_name>\")' <catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"<package_name>\") | .name' /<path>/<catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"openshift-pipelines-operator-rh\") | .name' /home/username/rhoc.json", "\"latest\" \"pipelines-1.11\" \"pipelines-1.12\" \"pipelines-1.13\" \"pipelines-1.14\"", "jq -s '.[] | select( .package == \"<package_name>\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"<channel_name>\" ) | .entries | .[] | .name' /<path>/<catalog_name>.json", "jq -s '.[] | select( .package == \"openshift-pipelines-operator-rh\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"latest\" ) | .entries | .[] | 
.name' /home/username/rhoc.json", "\"openshift-pipelines-operator-rh.v1.12.0\" \"openshift-pipelines-operator-rh.v1.12.1\" \"openshift-pipelines-operator-rh.v1.12.2\" \"openshift-pipelines-operator-rh.v1.13.0\" \"openshift-pipelines-operator-rh.v1.13.1\" \"openshift-pipelines-operator-rh.v1.11.1\" \"openshift-pipelines-operator-rh.v1.12.0\" \"openshift-pipelines-operator-rh.v1.12.1\" \"openshift-pipelines-operator-rh.v1.12.2\" \"openshift-pipelines-operator-rh.v1.13.0\" \"openshift-pipelines-operator-rh.v1.14.1\" \"openshift-pipelines-operator-rh.v1.14.2\" \"openshift-pipelines-operator-rh.v1.14.3\" \"openshift-pipelines-operator-rh.v1.14.4\"", "oc adm new-project <new_namespace>", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace> channel: <channel> version: \"<version>\"", "oc apply -f pipeline-operator.yaml", "clusterextension.olm.operatorframework.io/pipelines-operator created", "oc get clusterextension pipelines-operator -o yaml", "apiVersion: v1 items: - apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"olm.operatorframework.io/v1alpha1\",\"kind\":\"ClusterExtension\",\"metadata\":{\"annotations\":{},\"name\":\"pipelines-operator\"},\"spec\":{\"channel\":\"latest\",\"installNamespace\":\"openshift-operators\",\"packageName\":\"openshift-pipelines-operator-rh\",\"pollInterval\":\"30m\"}} creationTimestamp: \"2024-06-10T17:50:51Z\" generation: 1 name: pipelines-operator resourceVersion: \"53324\" uid: c54237be-cde4-46d4-9b31-d0ec6acc19bf spec: channel: latest installNamespace: openshift-operators packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce status: conditions: - lastTransitionTime: \"2024-06-10T17:50:58Z\" message: resolved to \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:dd3d18367da2be42539e5dde8e484dac3df33ba3ce1d5bcf896838954f3864ec\" observedGeneration: 1 reason: Success status: \"True\" type: Resolved - lastTransitionTime: \"2024-06-10T17:51:11Z\" message: installed from \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:dd3d18367da2be42539e5dde8e484dac3df33ba3ce1d5bcf896838954f3864ec\" observedGeneration: 1 reason: Success status: \"True\" type: Installed - lastTransitionTime: \"2024-06-10T17:50:58Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: Deprecated - lastTransitionTime: \"2024-06-10T17:50:58Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: PackageDeprecated - lastTransitionTime: \"2024-06-10T17:50:58Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: ChannelDeprecated - lastTransitionTime: \"2024-06-10T17:50:58Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: BundleDeprecated installedBundle: name: openshift-pipelines-operator-rh.v1.14.4 version: 1.14.4 resolvedBundle: name: openshift-pipelines-operator-rh.v1.14.4 version: 1.14.4 kind: List metadata: resourceVersion: \"\"", "oc get bundleDeployment pipelines-operator -o yaml", "apiVersion: core.rukpak.io/v1alpha2 kind: BundleDeployment metadata: creationTimestamp: \"2024-06-10T17:50:58Z\" finalizers: - core.rukpak.io/delete-cached-bundle generation: 1 name: pipelines-operator ownerReferences: - apiVersion: olm.operatorframework.io/v1alpha1 blockOwnerDeletion: true 
controller: true kind: ClusterExtension name: pipelines-operator uid: c54237be-cde4-46d4-9b31-d0ec6acc19bf resourceVersion: \"53414\" uid: 74367cfc-578e-4da0-815f-fe40f3ca5d1c spec: installNamespace: openshift-operators provisionerClassName: core-rukpak-io-registry source: image: ref: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:dd3d18367da2be42539e5dde8e484dac3df33ba3ce1d5bcf896838954f3864ec type: image status: conditions: - lastTransitionTime: \"2024-06-10T17:51:09Z\" message: Successfully unpacked the image Bundle reason: UnpackSuccessful status: \"True\" type: Unpacked - lastTransitionTime: \"2024-06-10T17:51:10Z\" message: Instantiated bundle pipelines-operator successfully reason: InstallationSucceeded status: \"True\" type: Installed - lastTransitionTime: \"2024-06-10T17:51:19Z\" message: BundleDeployment is healthy reason: Healthy status: \"True\" type: Healthy contentURL: https://core.openshift-rukpak.svc/bundles/pipelines-operator.tgz observedGeneration: 1 resolvedSource: image: ref: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:dd3d18367da2be42539e5dde8e484dac3df33ba3ce1d5bcf896838954f3864ec type: image", "jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"<package_name>\") | .name' /<path>/<catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"openshift-pipelines-operator-rh\") | .name' /home/username/rhoc.json", "\"latest\" \"pipelines-1.11\" \"pipelines-1.12\" \"pipelines-1.13\" \"pipelines-1.14\"", "jq -s '.[] | select( .package == \"<package_name>\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"<channel_name>\" ) | .entries | .[] | .name' /<path>/<catalog_name>.json", "jq -s '.[] | select( .package == \"openshift-pipelines-operator-rh\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"latest\" ) | .entries | .[] | .name' /home/username/rhoc.json", "\"openshift-pipelines-operator-rh.v1.11.1\" \"openshift-pipelines-operator-rh.v1.12.0\" \"openshift-pipelines-operator-rh.v1.12.1\" \"openshift-pipelines-operator-rh.v1.12.2\" \"openshift-pipelines-operator-rh.v1.13.0\" \"openshift-pipelines-operator-rh.v1.14.1\" \"openshift-pipelines-operator-rh.v1.14.2\" \"openshift-pipelines-operator-rh.v1.14.3\" \"openshift-pipelines-operator-rh.v1.14.4\"", "oc get clusterextension <operator_name> -o yaml", "oc get clusterextension pipelines-operator -o yaml", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"olm.operatorframework.io/v1alpha1\",\"kind\":\"ClusterExtension\",\"metadata\":{\"annotations\":{},\"name\":\"pipelines-operator\"},\"spec\":{\"channel\":\"latest\",\"installNamespace\":\"openshift-operators\",\"packageName\":\"openshift-pipelines-operator-rh\",\"pollInterval\":\"30m\",\"version\":\"\\u003c1.12\"}} creationTimestamp: \"2024-06-11T15:55:37Z\" generation: 1 name: pipelines-operator resourceVersion: \"69776\" uid: 6a11dff3-bfa3-42b8-9e5f-d8babbd6486f spec: channel: latest installNamespace: openshift-operators packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: <1.12 status: conditions: - lastTransitionTime: \"2024-06-11T15:56:09Z\" message: installed from \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280\" observedGeneration: 1 reason: Success status: \"True\" type: Installed - lastTransitionTime: 
\"2024-06-11T15:55:50Z\" message: resolved to \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280\" observedGeneration: 1 reason: Success status: \"True\" type: Resolved - lastTransitionTime: \"2024-06-11T15:55:50Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: Deprecated - lastTransitionTime: \"2024-06-11T15:55:50Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: PackageDeprecated - lastTransitionTime: \"2024-06-11T15:55:50Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: ChannelDeprecated - lastTransitionTime: \"2024-06-11T15:55:50Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: BundleDeprecated installedBundle: name: openshift-pipelines-operator-rh.v1.11.1 version: 1.11.1 resolvedBundle: name: openshift-pipelines-operator-rh.v1.11.1 version: 1.11.1", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace> version: \"1.12.1\" 1", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace> version: \">1.11.1, <1.13\" 1", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace> channel: pipelines-1.13 1", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace> channel: latest version: \"<1.13\"", "oc apply -f pipelines-operator.yaml", "clusterextension.olm.operatorframework.io/pipelines-operator configured", "oc patch clusterextension/pipelines-operator -p '{\"spec\":{\"version\":\"<1.13\"}}' --type=merge", "clusterextension.olm.operatorframework.io/pipelines-operator patched", "oc get clusterextension pipelines-operator -o yaml", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"olm.operatorframework.io/v1alpha1\",\"kind\":\"ClusterExtension\",\"metadata\":{\"annotations\":{},\"name\":\"pipelines-operator\"},\"spec\":{\"channel\":\"latest\",\"installNamespace\":\"openshift-operators\",\"packageName\":\"openshift-pipelines-operator-rh\",\"pollInterval\":\"30m\",\"version\":\"\\u003c1.13\"}} creationTimestamp: \"2024-06-11T18:23:26Z\" generation: 2 name: pipelines-operator resourceVersion: \"66310\" uid: ce0416ba-13ea-4069-a6c8-e5efcbc47537 spec: channel: latest installNamespace: openshift-operators packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: <1.13 status: conditions: - lastTransitionTime: \"2024-06-11T18:23:33Z\" message: resolved to \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:814742c8a7cc7e2662598e114c35c13993a7b423cfe92548124e43ea5d469f82\" observedGeneration: 2 reason: Success status: \"True\" type: Resolved - lastTransitionTime: \"2024-06-11T18:23:52Z\" message: installed from \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:814742c8a7cc7e2662598e114c35c13993a7b423cfe92548124e43ea5d469f82\" observedGeneration: 2 reason: Success status: \"True\" type: 
Installed - lastTransitionTime: \"2024-06-11T18:23:33Z\" message: \"\" observedGeneration: 2 reason: Deprecated status: \"False\" type: Deprecated - lastTransitionTime: \"2024-06-11T18:23:33Z\" message: \"\" observedGeneration: 2 reason: Deprecated status: \"False\" type: PackageDeprecated - lastTransitionTime: \"2024-06-11T18:23:33Z\" message: \"\" observedGeneration: 2 reason: Deprecated status: \"False\" type: ChannelDeprecated - lastTransitionTime: \"2024-06-11T18:23:33Z\" message: \"\" observedGeneration: 2 reason: Deprecated status: \"False\" type: BundleDeprecated installedBundle: name: openshift-pipelines-operator-rh.v1.12.2 version: 1.12.2 resolvedBundle: name: openshift-pipelines-operator-rh.v1.12.2 version: 1.12.2", "oc get clusterextension <operator_name> -o yaml", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"olm.operatorframework.io/v1alpha1\",\"kind\":\"ClusterExtension\",\"metadata\":{\"annotations\":{},\"name\":\"pipelines-operator\"},\"spec\":{\"channel\":\"latest\",\"installNamespace\":\"openshift-operators\",\"packageName\":\"openshift-pipelines-operator-rh\",\"pollInterval\":\"30m\",\"version\":\"3.0\"}} creationTimestamp: \"2024-06-11T18:23:26Z\" generation: 3 name: pipelines-operator resourceVersion: \"71852\" uid: ce0416ba-13ea-4069-a6c8-e5efcbc47537 spec: channel: latest installNamespace: openshift-operators packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: \"3.0\" status: conditions: - lastTransitionTime: \"2024-06-11T18:29:02Z\" message: 'error upgrading from currently installed version \"1.12.2\": no package \"openshift-pipelines-operator-rh\" matching version \"3.0\" found in channel \"latest\"' observedGeneration: 3 reason: ResolutionFailed status: \"False\" type: Resolved - lastTransitionTime: \"2024-06-11T18:29:02Z\" message: installation has not been attempted as resolution failed observedGeneration: 3 reason: InstallationStatusUnknown status: Unknown type: Installed - lastTransitionTime: \"2024-06-11T18:29:02Z\" message: deprecation checks have not been attempted as resolution failed observedGeneration: 3 reason: Deprecated status: Unknown type: Deprecated - lastTransitionTime: \"2024-06-11T18:29:02Z\" message: deprecation checks have not been attempted as resolution failed observedGeneration: 3 reason: Deprecated status: Unknown type: PackageDeprecated - lastTransitionTime: \"2024-06-11T18:29:02Z\" message: deprecation checks have not been attempted as resolution failed observedGeneration: 3 reason: Deprecated status: Unknown type: ChannelDeprecated - lastTransitionTime: \"2024-06-11T18:29:02Z\" message: deprecation checks have not been attempted as resolution failed observedGeneration: 3 reason: Deprecated status: Unknown type: BundleDeprecated", "- name: example.v3.0.0 skips: [\"example.v2.0.0\"] - name: example.v2.0.0 skipRange: >=1.0.0 <2.0.0", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> version: \">=1.11, <1.13\"", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> channel: latest 1", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: 
openshift-pipelines-operator-rh installNamespace: <namespace_name> version: \"1.11.1\" 1", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> version: \">1.11.1\" 1", "oc apply -f <extension_name>.yaml", "apiVersion: olm.operatorframework.io/v1alpha1 kind: Operator metadata: name: <operator_name> 1 spec: packageName: <package_name> 2 installNamespace: <namespace_name> version: <version> 3 upgradeConstraintPolicy: Ignore 4", "oc apply -f <extension_name>.yaml", "oc delete clusterextension <operator_name>", "clusterextension.olm.operatorframework.io \"<operator_name>\" deleted", "oc get clusterextensions", "No resources found", "oc get ns <operator_name>-system", "Error from server (NotFound): namespaces \"<operator_name>-system\" not found", "oc delete catalog <catalog_name>", "catalog.catalogd.operatorframework.io \"my-catalog\" deleted", "oc get catalog" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/operators/index
Chapter 19. Installing on any platform
Chapter 19. Installing on any platform 19.1. Installing a cluster on any platform In OpenShift Container Platform version 4.9, you can install a cluster on any infrastructure that you provision, including virtualization and cloud environments. Important Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in virtualized or cloud environments. 19.1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 19.1.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 19.1.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 19.1.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 19.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 7.9, or RHEL 8.4. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 19.1.3.2. 
Minimum resource requirements for cluster installation
Each cluster machine must meet the following minimum requirements:
Table 19.2. Minimum resource requirements
Machine | Operating System | vCPU [1] | Virtual RAM | Storage | IOPS [2]
Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300
Control plane | RHCOS | 4 | 16 GB | 100 GB | 300
Compute | RHCOS, RHEL 7.9, or RHEL 8.4 [3] | 2 | 8 GB | 100 GB | 300
[1] One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs.
[2] OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
[3] As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and planned for removal in a future release of OpenShift Container Platform 4.
19.1.3.3. Certificate signing requests management
Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
19.1.3.4. Networking requirements for user-provisioned infrastructure
All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation.
It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines.
Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options.
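For reference, a static IP configuration of this kind is usually expressed as kernel arguments appended at the RHCOS ISO boot prompt. The following lines are only an illustrative sketch: the interface name ens192, the 192.0.2.x addresses, the hostname, and the Ignition file URL are placeholders, and the exact argument set depends on your environment and RHCOS version.
ip=192.0.2.21::192.0.2.1:255.255.255.0:master0.example.com:ens192:none
nameserver=192.0.2.5
coreos.inst.install_dev=/dev/sda
coreos.inst.ignition_url=http://192.0.2.5:8080/master.ign
Here the ip= argument supplies the address, gateway, netmask, hostname, and interface in a single value, while nameserver= provides the DNS server information that would otherwise be delivered by DHCP.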
The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests.
19.1.3.4.1. Setting the cluster node hostnames through DHCP
On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation.
19.1.3.4.2. Network connectivity requirements
You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required.
Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat.
Table 19.3. Ports used for all-machine to all-machine communications
Protocol | Port | Description
ICMP | N/A | Network reachability tests
TCP | 1936 | Metrics
TCP | 9000 - 9999 | Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099.
TCP | 10250 - 10259 | The default ports that Kubernetes reserves
TCP | 10256 | openshift-sdn
UDP | 4789 | VXLAN and Geneve
UDP | 6081 | VXLAN and Geneve
UDP | 9000 - 9999 | Host level services, including the node exporter on ports 9100 - 9101.
UDP | 500 | IPsec IKE packets
UDP | 4500 | IPsec NAT-T packets
TCP/UDP | 30000 - 32767 | Kubernetes node port
ESP | N/A | IPsec Encapsulating Security Payload (ESP)
Table 19.4. Ports used for all-machine to control plane communications
Protocol | Port | Description
TCP | 6443 | Kubernetes API
Table 19.5. Ports used for control plane machine to control plane machine communications
Protocol | Port | Description
TCP | 2379 - 2380 | etcd server and peer ports
NTP configuration for user-provisioned infrastructure
OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service. If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines reads the information and can sync the clock with the NTP servers.
Additional resources
Configuring chrony time service
19.1.3.5.
User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 19.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <worker><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. 
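The dig checks mentioned in the tip that follows can be run against each of these records before installation. A brief illustration, using the generic record names from the table above (substitute your own cluster name, base domain, and load balancer IP address):
dig +noall +answer api.<cluster_name>.<base_domain> A
dig +noall +answer api-int.<cluster_name>.<base_domain> A
dig +noall +answer test.apps.<cluster_name>.<base_domain> A
dig +noall +answer -x <api_load_balancer_ip>
The first three commands confirm forward resolution for the API, internal API, and wildcard application records; the -x form confirms the corresponding PTR record.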
Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 19.1.3.5.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 19.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 19.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 
8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 19.1.3.6. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Note Session persistence is not required for the API load balancer to function properly. Configure the following ports on both the front and back of the load balancers: Table 19.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. 
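As an illustration of the API load balancer health check guidance in the note above, the following HAProxy fragment is one possible way to express the /readyz probe with the well-tested interval and thresholds (probe every 10 seconds, two successes to become healthy, three failures to become unhealthy). Treat it as a sketch only, not a required configuration: the complete sample later in this section uses plain TCP checks, and the host names below follow the ocp4.example.com example cluster.
listen api-server-6443
  bind *:6443
  mode tcp
  # Check the API server /readyz endpoint over TLS while passing client traffic through as raw TCP
  option httpchk GET /readyz HTTP/1.0
  option log-health-checks
  # A 10 second interval with fall 3 keeps removal within roughly the 30 second maximum noted above
  server master0 master0.ocp4.example.com:6443 check check-ssl verify none inter 10s rise 2 fall 3
  server master1 master1.ocp4.example.com:6443 check check-ssl verify none inter 10s rise 2 fall 3
  server master2 master2.ocp4.example.com:6443 check check-ssl verify none inter 10s rise 2 fall 3
  # A bootstrap entry, marked backup, would also be present until the bootstrap process completes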
Tip If the true IP address of the client can be seen by the application ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 19.8. Application ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic 1936 The worker nodes that run the Ingress Controller pods, by default. You must configure the /healthz/ready endpoint for the ingress health check probe. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Note A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes. 19.1.3.6.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Example 19.3. 
Sample API and application ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 In the example, the cluster name is ocp4 . 2 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 3 5 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 4 Port 22623 handles the machine config server traffic and points to the control plane machines. 6 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 7 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . 19.1.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. 
This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. 
See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 19.1.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 0 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 0 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 0 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 0 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 
0 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 19.1.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 19.1.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 19.1.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.9. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.9 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH.
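For example, assuming that /usr/local/bin is already on your PATH (an illustrative choice; any directory on your PATH works), you might place the binary as follows:
USD chmod +x oc
USD sudo mv oc /usr/local/bin/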
To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.9 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.9 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 19.1.9. Manually creating the installation configuration file For user-provisioned installations of OpenShift Container Platform, you manually generate your installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Note For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 19.1.9.1. Sample install-config.yaml file for other platforms You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.
apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{"auths": ...}' 14 sshKey: 'ssh-ed25519 AAAA...' 15 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 
12 You must set the platform to none . You cannot provide additional platform configuration variables for your platform. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 13 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 14 The pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 15 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 19.1.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 19.1.9.3. Configuring a three-node cluster You can optionally deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 19.1.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. 
The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 19.1.11. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on bare metal infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. 
To install RHCOS on the machines, follow either the steps to use an ISO image or network PXE booting. Note The compute node deployment steps included in this installation document are RHCOS-specific. If you choose instead to deploy RHEL-based compute nodes, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and planned for removal in a future release of OpenShift Container Platform 4. You can configure RHCOS during ISO and PXE installations by using the following methods: Kernel arguments: You can use kernel arguments to provide installation-specific information. For example, you can specify the locations of the RHCOS installation files that you uploaded to your HTTP server and the location of the Ignition config file for the type of node you are installing. For a PXE installation, you can use the APPEND parameter to pass the arguments to the kernel of the live installer. For an ISO installation, you can interrupt the live installation boot process to add the kernel arguments. In both installation cases, you can use special coreos.inst.* arguments to direct the live installer, as well as standard installation boot arguments for turning standard kernel services on or off. Ignition configs: OpenShift Container Platform Ignition config files ( *.ign ) are specific to the type of node you are installing. You pass the location of a bootstrap, control plane, or compute node Ignition config file during the RHCOS installation so that it takes effect on first boot. In special cases, you can create a separate, limited Ignition config to pass to the live system. That Ignition config could do a certain set of tasks, such as reporting success to a provisioning system after completing installation. This special Ignition config is consumed by the coreos-installer to be applied on first boot of the installed system. Do not provide the standard control plane and compute node Ignition configs to the live ISO directly. coreos-installer : You can boot the live ISO installer to a shell prompt, which allows you to prepare the permanent system in a variety of ways before first boot. In particular, you can run the coreos-installer command to identify various artifacts to include, work with disk partitions, and set up networking. In some cases, you can configure features on the live system and copy them to the installed system. Whether to use an ISO or PXE install depends on your situation. A PXE install requires an available DHCP service and more preparation, but can make the installation process more automated. An ISO install is a more manual process and can be inconvenient if you are setting up more than a few machines. Note As of OpenShift Container Platform 4.6, the RHCOS ISO and other installation artifacts provide support for installation on disks with 4K sectors. 19.1.11.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. 
Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. 
At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 19.1.11.2. Installing RHCOS by using PXE or iPXE booting You can use PXE or iPXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE or iPXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. 
You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS kernel , initramfs and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.9-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. 
They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE or iPXE installation for the RHCOS images and begin the installation. Modify one of the following example menu entries for your environment and verify that the image and Ignition files are properly accessible: For PXE: 1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . For iPXE: 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Continue to create the machines for your cluster. 
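For orientation, PXE and iPXE menu entries that are consistent with the callout descriptions earlier in this procedure might resemble the following sketches. The HTTP server address, the target device /dev/sda , and the bootstrap.ign file name are placeholder assumptions; adapt them to your environment and to the node type you are installing, and append arguments such as ip=eno1:dhcp where your network setup requires them.
Representative PXE menu entry:
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
    KERNEL http://<http_server>/rhcos-<version>-live-kernel-<architecture>
    APPEND initrd=http://<http_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<http_server>/bootstrap.ign
Representative iPXE script:
kernel http://<http_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<http_server>/bootstrap.ign
initrd --name main http://<http_server>/rhcos-<version>-live-initramfs.<architecture>.img
boot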
Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install-config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 19.1.11.3. Advanced RHCOS installation configuration A key benefit of manually provisioning the Red Hat Enterprise Linux CoreOS (RHCOS) nodes for OpenShift Container Platform is the ability to perform configuration that is not available through default OpenShift Container Platform installation methods. This section describes some of the configurations that you can perform using techniques that include: Passing kernel arguments to the live installer Running coreos-installer manually from the live system Embedding Ignition configs in an ISO The advanced configuration topics for manual Red Hat Enterprise Linux CoreOS (RHCOS) installations detailed in this section relate to disk partitioning, networking, and using Ignition configs in different ways. 19.1.11.3.1. Using advanced networking options for PXE and ISO installations Networking for OpenShift Container Platform nodes uses DHCP by default to gather all necessary configuration settings. To set up static IP addresses or configure special settings, such as bonding, you can do one of the following: Pass special kernel parameters when you boot the live installer. Use a machine config to copy networking files to the installed system. Configure networking from a live installer shell prompt, then copy those settings to the installed system so that they take effect when the installed system first boots. To configure a PXE or iPXE installation, use one of the following options: See the "Advanced RHCOS installation reference" tables. Use a machine config to copy networking files to the installed system. To configure an ISO installation, use the following procedure. Procedure Boot the ISO installer. From the live system shell prompt, configure networking for the live system using available RHEL tools, such as nmcli or nmtui . Run the coreos-installer command to install the system, adding the --copy-network option to copy networking configuration. For example: USD sudo coreos-installer install --copy-network \ --ignition-url=http://host/worker.ign /dev/sda Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. Reboot into the installed system. Additional resources See Getting started with nmcli and Getting started with nmtui in the RHEL 8 documentation for more information about the nmcli and nmtui tools. 19.1.11.3.2.
Disk partitioning The disk partitions are created on OpenShift Container Platform cluster nodes during the Red Hat Enterprise Linux CoreOS (RHCOS) installation. Each RHCOS node of a particular architecture uses the same partition layout, unless the default partitioning configuration is overridden. During the RHCOS installation, the size of the root file system is increased to use the remaining available space on the target device. There are two cases where you might want to override the default partitioning when installing RHCOS on an OpenShift Container Platform cluster node: Creating separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for mounting /var or a subdirectory of /var , such as /var/lib/etcd , on a separate partition, but not both. Important For disk sizes larger than 100GB, and especially disk sizes larger than 1TB, create a separate /var partition. See "Creating a separate /var partition" and this Red Hat Knowledgebase article for more information. Important Kubernetes supports only two file system partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. Retaining existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Warning The use of custom partitions could result in those partitions not being monitored by OpenShift Container Platform or alerted on. If you are overriding the default partitioning, see Understanding OpenShift File System Monitoring (eviction conditions) for more information about how OpenShift Container Platform monitors your host file systems. 19.1.11.3.2.1. Creating a separate /var partition In general, you should use the default disk partitioning that is created during the RHCOS installation. However, there are cases where you might want to create a separate partition for a directory that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var directory or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth in the partitioned directory from filling up the root file system. The following procedure sets up a separate /var partition by adding a machine config manifest that is wrapped into the Ignition config file for a node type during the preparation phase of an installation. 
Procedure On your installation host, change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD openshift-install create manifests --dir <installation_directory> Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no offset value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for compute nodes if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Create the Ignition config files: USD openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The files in the <installation_directory>/manifests and <installation_directory>/openshift directories are wrapped into the Ignition config files, including the file that contains the 98-var-partition custom MachineConfig object. Next steps You can apply the custom disk partitioning by referencing the Ignition config files during the RHCOS installations. 19.1.11.3.2.2. Retaining existing partitions For an ISO installation, you can add options to the coreos-installer command that cause the installer to maintain one or more existing partitions. For a PXE installation, you can add coreos.inst.* options to the APPEND parameter to preserve partitions. Saved partitions might be data partitions from an existing OpenShift Container Platform system. You can identify the disk partitions you want to keep either by partition label or by number. Note If you save existing partitions, and those partitions do not leave enough space for RHCOS, the installation will fail without damaging the saved partitions.
Retaining existing partitions during an ISO installation This example preserves any partition in which the partition label begins with data ( data* ): # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partlabel 'data*' /dev/sda The following example illustrates running the coreos-installer in a way that preserves the sixth (6) partition on the disk: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partindex 6 /dev/sda This example preserves partitions 5 and higher: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/sda In the examples where partition saving is used, coreos-installer recreates the partition immediately. Retaining existing partitions during a PXE installation This APPEND option preserves any partition in which the partition label begins with 'data' ('data*'): coreos.inst.save_partlabel=data* This APPEND option preserves partitions 5 and higher: coreos.inst.save_partindex=5- This APPEND option preserves partition 6: coreos.inst.save_partindex=6 19.1.11.3.3. Identifying Ignition configs When doing an RHCOS manual installation, there are two types of Ignition configs that you can provide, with different reasons for providing each one: Permanent install Ignition config : Every manual RHCOS installation needs to pass one of the Ignition config files generated by openshift-installer , such as bootstrap.ign , master.ign and worker.ign , to carry out the installation. Important It is not recommended to modify these Ignition config files directly. You can update the manifest files that are wrapped into the Ignition config files, as outlined in examples in the preceding sections. For PXE installations, you pass the Ignition configs on the APPEND line using the coreos.inst.ignition_url= option. For ISO installations, after the ISO boots to the shell prompt, you identify the Ignition config on the coreos-installer command line with the --ignition-url= option. In both cases, only HTTP and HTTPS protocols are supported. Live install Ignition config : This type must be created manually and should be avoided if possible, as it is not supported by Red Hat. With this method, the Ignition config passes to the live install medium, runs immediately upon booting, and performs setup tasks before and/or after the RHCOS system installs to disk. This method should only be used for performing tasks that must be done once and not applied again later, such as with advanced partitioning that cannot be done using a machine config. For PXE or ISO boots, you can create the Ignition config and APPEND the ignition.config.url= option to identify the location of the Ignition config. You also need to append ignition.firstboot ignition.platform.id=metal or the ignition.config.url option will be ignored. 19.1.11.3.3.1. Embedding a live install Ignition config in the RHCOS ISO You can embed a live install Ignition config directly in an RHCOS ISO image. When the ISO image is booted, the embedded config will be applied automatically. Procedure Download the coreos-installer binary from the following image mirror page: https://mirror.openshift.com/pub/openshift-v4/clients/coreos-installer/latest/ . 
Retrieve the RHCOS ISO image and the Ignition config file, and copy them into an accessible directory, such as /mnt : # cp rhcos-<version>-live.x86_64.iso bootstrap.ign /mnt/ # chmod 644 /mnt/rhcos-<version>-live.x86_64.iso Run the following command to embed the Ignition config into the ISO: # ./coreos-installer iso ignition embed -i /mnt/bootstrap.ign \ /mnt/rhcos-<version>-live.x86_64.iso You can now use that ISO to install RHCOS using the specified live install Ignition config. Important Using coreos-installer iso ignition embed to embed a file generated by openshift-installer , such as bootstrap.ign , master.ign , and worker.ign , is unsupported and not recommended. To show the contents of the embedded Ignition config and direct it into a file, run: # ./coreos-installer iso ignition show /mnt/rhcos-<version>-live.x86_64.iso > mybootstrap.ign # diff -s bootstrap.ign mybootstrap.ign Example output Files bootstrap.ign and mybootstrap.ign are identical To remove the Ignition config and return the ISO to its pristine state so you can reuse it, run: # ./coreos-installer iso ignition remove /mnt/rhcos-<version>-live.x86_64.iso You can now embed another Ignition config into the ISO or use the ISO in its pristine state. 19.1.11.3.4. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 19.1.11.3.4.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page. The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node.
The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. 
To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=name[:network_interfaces][:options] name is the bonding device name ( bond0 ), network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Configuring VLANs on bonded interfaces Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter. Use the following example to configure the bonded interface with a VLAN and to use DHCP: ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Use the following example to configure the bonded interface with a VLAN and to use a static IP address: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Using network teaming Optional: You can use network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 19.1.11.3.4.2. coreos-installer options for ISO installations You can install RHCOS by running coreos-installer install <options> <device> at the command prompt, after booting into the RHCOS live environment from an ISO image. The following table shows the subcommands, options, and arguments you can pass to the coreos-installer command. Table 19.9. coreos-installer subcommands, command-line options, and arguments coreos-installer install subcommand Subcommand Description USD coreos-installer install <options> <device> Install RHCOS to the specified destination device. coreos-installer install subcommand options Option Description -u , --image-url <url> Specify the image URL manually. -f , --image-file <path> Specify a local image file manually. Used for debugging. -i , --ignition-file <path> Embed an Ignition config from a file. -I , --ignition-url <URL> Embed an Ignition config from a URL.
--ignition-hash <digest> Digest type-value of the Ignition config. -p , --platform <name> Override the Ignition platform ID for the installed system. --append-karg <arg>... Append a default kernel argument to the installed system. --delete-karg <arg>... Delete a default kernel argument from the installed system. -n , --copy-network Copy the network configuration from the install environment. Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. --network-dir <path> For use with -n . Default is /etc/NetworkManager/system-connections/ . --save-partlabel <lx>... Save partitions with this label glob. --save-partindex <id>... Save partitions with this number or range. --insecure Skip signature verification. --insecure-ignition Allow Ignition URL without HTTPS or hash. --architecture <name> Target CPU architecture. Default is x86_64 . --preserve-on-error Do not clear partition table on error. -h , --help Print help information. coreos-installer install subcommand argument Argument Description <device> The destination device. coreos-installer ISO Ignition subcommands Subcommand Description USD coreos-installer iso ignition embed <options> --ignition-file <file_path> <ISO_image> Embed an Ignition config in an ISO image. coreos-installer iso ignition show <options> <ISO_image> Show the embedded Ignition config from an ISO image. coreos-installer iso ignition remove <options> <ISO_image> Remove the embedded Ignition config from an ISO image. coreos-installer ISO Ignition subcommand options Option Description -f , --force Overwrite an existing Ignition config. -i , --ignition-file <path> The Ignition config to be used. Default is stdin . -o , --output <path> Write the ISO to a new output file. -h , --help Print help information. coreos-installer PXE Ignition subcommands Subcommand Description Note that not all of these options are accepted by all subcommands. coreos-installer pxe ignition wrap <options> Wrap an Ignition config in an image. coreos-installer pxe ignition unwrap <options> <image_name> Show the wrapped Ignition config in an image. coreos-installer PXE Ignition subcommand options Option Description Note that not all of these options are accepted by all subcommands. -i , --ignition-file <path> The Ignition config to be used. Default is stdin . -o , --output <path> Write the ISO to a new output file. -h , --help Print help information. 19.1.11.3.4.3. coreos.inst boot options for ISO or PXE installations You can automatically invoke coreos-installer options at boot time by passing coreos.inst boot arguments to the RHCOS live installer. These are provided in addition to the standard boot arguments. For ISO installations, the coreos.inst options can be added by interrupting the automatic boot at the bootloader menu. You can interrupt the automatic boot by pressing TAB while the RHEL CoreOS (Live) menu option is highlighted. For PXE or iPXE installations, the coreos.inst options must be added to the APPEND line before the RHCOS live installer is booted. The following table shows the RHCOS live installer coreos.inst boot options for ISO and PXE installations. Table 19.10. coreos.inst boot options Argument Description coreos.inst.install_dev Required. The block device on the system to install to. It is recommended to use the full path, such as /dev/sda , although sda is allowed.
coreos.inst.ignition_url Optional: The URL of the Ignition config to embed into the installed system. If no URL is specified, no Ignition config is embedded. Only HTTP and HTTPS protocols are supported. coreos.inst.save_partlabel Optional: Comma-separated labels of partitions to preserve during the install. Glob-style wildcards are permitted. The specified partitions do not need to exist. coreos.inst.save_partindex Optional: Comma-separated indexes of partitions to preserve during the install. Ranges m-n are permitted, and either m or n can be omitted. The specified partitions do not need to exist. coreos.inst.insecure Optional: Permits the OS image that is specified by coreos.inst.image_url to be unsigned. coreos.inst.image_url Optional: Download and install the specified RHCOS image. This argument should not be used in production environments and is intended for debugging purposes only. While this argument can be used to install a version of RHCOS that does not match the live media, it is recommended that you instead use the media that matches the version you want to install. If you are using coreos.inst.image_url , you must also use coreos.inst.insecure . This is because the bare-metal media are not GPG-signed for OpenShift Container Platform. Only HTTP and HTTPS protocols are supported. coreos.inst.skip_reboot Optional: The system will not reboot after installing. After the install finishes, you will receive a prompt that allows you to inspect what is happening during installation. This argument should not be used in production environments and is intended for debugging purposes only. coreos.inst.platform_id Optional: The Ignition platform ID of the platform the RHCOS image is being installed on. Default is metal . This option determines whether or not to request an Ignition config from the cloud provider, such as VMware. For example: coreos.inst.platform_id=vmware . ignition.config.url Optional: The URL of the Ignition config for the live boot. For example, this can be used to customize how coreos-installer is invoked, or to run code before or after the installation. This is different from coreos.inst.ignition_url , which is the Ignition config for the installed system. 19.1.11.4. Updating the bootloader using bootupd To update the bootloader by using bootupd , you must either install bootupd on RHCOS machines manually or provide a machine config with the enabled systemd unit. Unlike grubby or other bootloader tools, bootupd does not manage kernel space configuration such as passing kernel arguments. After you have installed bootupd , you can manage it remotely from the OpenShift Container Platform cluster. Note It is recommended that you use bootupd only on bare metal or virtualized hypervisor installations, such as for protection against the BootHole vulnerability. Manual install method You can manually install bootupd by using the bootupctl command-line tool. Inspect the system status: # bootupctl status Example output Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version RHCOS images created without bootupd installed on them require an explicit adoption phase.
If the system status is Adoptable , perform the adoption: # bootupctl adopt-and-update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 If an update is available, apply the update so that the changes take effect on the next reboot: # bootupctl update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Machine config method Another way to enable bootupd is by providing a machine config. Provide a machine config file with the enabled systemd unit, as shown in the following example: Example variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target 19.1.12. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS, and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 19.1.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 19.1.14.
Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. 
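One minimal sketch of such an automatic approval method, assuming oc is authenticated as a user who is permitted to approve CSRs, is a polling loop that approves pending requests whose requestor is a system:node: user. This is an illustration only: a production-grade approver should also confirm the identity of the requesting node as described above, and the loop should be stopped once the installation is complete.
while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{" "}}{{.spec.username}}{{"\n"}}{{end}}{{end}}' \
    | awk '$2 ~ /^system:node:/ {print $1}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 60
done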
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 19.1.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m Configure the Operators that are not available. 19.1.15.1. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. 
Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources. 19.1.15.2. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . Note The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags , BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io." 19.1.15.3. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 19.1.15.3.1. Configuring registry storage for bare metal and other manual installations As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster that uses manually-provisioned Red Hat Enterprise Linux CoreOS (RHCOS) nodes, such as bare metal. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC.
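If you have provisioned your own persistent volume claim for the registry instead, a sketch of the same stanza with an explicit claim would look like the following, where image-registry-pvc is a hypothetical PersistentVolumeClaim in the openshift-image-registry namespace:
storage:
  pvc:
    claim: image-registry-pvc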
Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.9 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run oc edit configs.imageregistry.operator.openshift.io and change the line managementState: Removed to managementState: Managed . 19.1.15.3.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 19.1.15.3.3. Configuring block registry storage To allow the image registry to use block storage types during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only one ( 1 ) replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Edit the registry configuration so that it references the correct PVC. 19.1.16. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m Alternatively, the following command notifies you when the cluster is available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods.
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the previous command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the previous command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. 19.1.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 19.1.18. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage .
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 
0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 0 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{\"auths\": ...}' 14 sshKey: 'ssh-ed25519 AAAA...' 15", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "compute: - name: worker platform: {} replicas: 0", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main 
coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/sda", "openshift-install create manifests --dir <installation_directory>", "variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/sda", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/sda", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/sda", "coreos.inst.save_partlabel=data*", "coreos.inst.save_partindex=5-", "coreos.inst.save_partindex=6", "cp rhcos-<version>-live.x86_64.iso bootstrap.ign /mnt/ chmod 644 /mnt/rhcos-<version>-live.x86_64.iso", "./coreos-installer iso ignition embed -i /mnt/bootstrap.ign /mnt/rhcos-<version>-live.x86_64.iso", "./coreos-installer iso ignition show /mnt/rhcos-<version>-live.x86_64.iso > mybootstrap.ign", "diff -s bootstrap.ign mybootstrap.ign", "Files bootstrap.ign and mybootstrap.ign are identical", "./coreos-installer iso ignition remove /mnt/rhcos-<version>-live.x86_64.iso", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "# bootupctl status", "Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version", "# bootupctl adopt-and-update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "# bootupctl update", "Updated: 
grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l 
docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.9 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/installing/installing-on-any-platform
Chapter 1. Overview
Chapter 1. Overview Cockpit is a system administration tool that provides a user interface for monitoring and administering servers through a web browser. It allows you to monitor current values and adjust limits on system resources, control the life cycle of container instances, and manipulate container images. Here are a few important facts about Cockpit: Cockpit does not add a layer of other functionalities that are not present on your systems. It exposes user interface elements that enable you to interact with the system. Cockpit does not take control of your servers in a way that would lock you into managing them only from Cockpit after you configure something there. You can effectively move away from Cockpit to the command line and come back to it at any point. Cockpit does not require configuration or infrastructure, and once you install it, it is ready for use. You could, however, configure it to make use of the authentication infrastructure that is available to you, for example a single sign-on system like Kerberos. Cockpit has zero memory and process footprint on the server when not in use. Cockpit does not store data or policy. This also means it does not have its own users. The users from the systems can authenticate in Cockpit using their system credentials and they keep the same permissions. Cockpit dynamically updates itself to reflect the current state of the server, within a time frame of a few seconds. Cockpit is not intended for configuration management. This means that Cockpit itself does not have a predefined template or state for the server that it then imposes on the server. Cockpit can interact with other configuration management systems or custom tools that are manipulating server configuration. This document provides instructions on how to install and enable Cockpit so you can monitor your servers, describes basic configuration, and walks you through the interface. Both Red Hat Enterprise Linux and Red Hat Enterprise Linux Atomic Host can be used for the role of a Cockpit server and that of a secondary server. In this document, all monitored systems are Atomic, but the instructions also cover how to set up Red Hat Enterprise Linux as a primary server. Note Cockpit does not yet have support for Kubernetes on Red Hat Enterprise Linux or Red Hat Enterprise Linux Atomic Host servers.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_cockpit/overview
Chapter 6. Running Ansible playbooks with automation content navigator
Chapter 6. Running Ansible playbooks with automation content navigator As a content creator, you can execute your Ansible playbooks with automation content navigator and interactively delve into the results of each play and task to verify or troubleshoot the playbook. You can also execute your Ansible playbooks inside an execution environment and without an execution environment to compare and troubleshoot any problems. 6.1. Executing a playbook from automation content navigator You can run Ansible playbooks with the automation content navigator text-based user interface to follow the execution of the tasks and delve into the results of each task. Prerequisites A playbook. A valid inventory file if not using localhost or an inventory plugin. Procedure Start automation content navigator. USD ansible-navigator Run the playbook. USD :run Optional: type ansible-navigator run simple-playbook.yml -i inventory.yml to run the playbook. Verify or add the inventory and any other command line parameters. INVENTORY OR PLAYBOOK NOT FOUND, PLEASE CONFIRM THE FOLLOWING ───────────────────────────────────────────────────────────────────────── Path to playbook: /home/ansible-navigator_demo/simple_playbook.yml Inventory source: /home/ansible-navigator-demo/inventory.yml Additional command line parameters: Please provide a value (optional) ────────────────────────────────────────────────────────────────────────── Submit Cancel Tab to Submit and hit Enter. You should see the tasks executing. Type the number next to a play to step into the play results, or type :<number> for numbers above 9. Notice that failed tasks show up in red if you have colors enabled for automation content navigator. Type the number next to a task to review the task results, or type :<number> for numbers above 9. Optional: type :doc to bring up the documentation for the module or plugin used in the task to aid in troubleshooting. ANSIBLE.BUILTIN.PACKAGE_FACTS (MODULE) 0│--- 1│doc: 2│ author: 3│ - Matthew Jones (@matburt) 4│ - Brian Coca (@bcoca) 5│ - Adam Miller (@maxamillion) 6│ collection: ansible.builtin 7│ description: 8│ - Return information about installed packages as facts. <... output omitted ...> 11│ module: package_facts 12│ notes: 13│ - Supports C(check_mode). 14│ options: 15│ manager: 16│ choices: 17│ - auto 18│ - rpm 19│ - apt 20│ - portage 21│ - pkg 22│ - pacman <... output truncated ...> Additional resources ansible-playbook Ansible playbooks 6.2. Reviewing playbook results with an automation content navigator artifact file Automation content navigator saves the results of the playbook run in a JSON artifact file. You can use this file to share the playbook results with someone else, save it for security or compliance reasons, or review and troubleshoot later. You only need the artifact file to review the playbook run. You do not need access to the playbook itself or inventory access. Prerequisites An automation content navigator artifact JSON file from a playbook run. Procedure Start automation content navigator with the artifact file. USD ansible-navigator replay simple_playbook_artifact.json Review the playbook results, which match the results from when the playbook ran. You can now type the number next to the plays and tasks to step into each to review the results, as you would after executing the playbook. Additional resources ansible-playbook Ansible playbooks
[ "ansible-navigator", ":run", "INVENTORY OR PLAYBOOK NOT FOUND, PLEASE CONFIRM THE FOLLOWING ───────────────────────────────────────────────────────────────────────── Path to playbook: /home/ansible-navigator_demo/simple_playbook.yml Inventory source: /home/ansible-navigator-demo/inventory.yml Additional command line parameters: Please provide a value (optional) ────────────────────────────────────────────────────────────────────────── Submit Cancel", "ANSIBLE.BUILTIN.PACKAGE_FACTS (MODULE) 0│--- 1│doc: 2│ author: 3│ - Matthew Jones (@matburt) 4│ - Brian Coca (@bcoca) 5│ - Adam Miller (@maxamillion) 6│ collection: ansible.builtin 7│ description: 8│ - Return information about installed packages as facts. <... output omitted ...> 11│ module: package_facts 12│ notes: 13│ - Supports C(check_mode). 14│ options: 15│ manager: 16│ choices: 17│ - auto 18│ - rpm 19│ - apt 20│ - portage 21│ - pkg 22│ - pacman <... output truncated ...>", "ansible-navigator replay simple_playbook_artifact.json" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_content_navigator_creator_guide/assembly-execute-playbooks-navigator_ansible-navigator
Chapter 7. Debezium Connector for Oracle
Chapter 7. Debezium Connector for Oracle Debezium's Oracle connector captures and records row-level changes that occur in databases on an Oracle server, including tables that are added while the connector is running. You can configure the connector to emit change events for specific subsets of schemas and tables, or to ignore, mask, or truncate values in specific columns. For information about the Oracle Database versions that are compatible with this connector, see the Debezium Supported Configurations page . Debezium ingests change events from Oracle by using the native LogMiner database package . Information and procedures for using a Debezium Oracle connector are organized as follows: Section 7.1, "How Debezium Oracle connectors work" Section 7.2, "Descriptions of Debezium Oracle connector data change events" Section 7.3, "How Debezium Oracle connectors map data types" Section 7.4, "Setting up Oracle to work with Debezium" Section 7.5, "Deployment of Debezium Oracle connectors" Section 7.6, "Descriptions of Debezium Oracle connector configuration properties" Section 7.7, "Monitoring Debezium Oracle connector performance" Section 7.8, "Oracle connector frequently asked questions" 7.1. How Debezium Oracle connectors work To optimally configure and run a Debezium Oracle connector, it is helpful to understand how the connector performs snapshots, streams change events, determines Kafka topic names, uses metadata, and implements event buffering. For more information, see the following topics: Section 7.1.1, "How Debezium Oracle connectors perform database snapshots" Section 7.1.2, "Ad hoc snapshots" Section 7.1.3, "Incremental snapshots" Section 7.1.4, "Default names of Kafka topics that receive Debezium Oracle change event records" Section 7.1.6, "How Debezium Oracle connectors expose database schema changes" Section 7.1.7, "Debezium Oracle connector-generated events that represent transaction boundaries" Section 7.1.8, "How the Debezium Oracle connector uses event buffering" 7.1.1. How Debezium Oracle connectors perform database snapshots Typically, the redo logs on an Oracle server are configured to not retain the complete history of the database. As a result, the Debezium Oracle connector cannot retrieve the entire history of the database from the logs. To enable the connector to establish a baseline for the current state of the database, the first time that the connector starts, it performs an initial consistent snapshot of the database. Note If the time needed to complete the initial snapshot exceeds the UNDO_RETENTION time that is set for the database (fifteen minutes, by default), an ORA-01555 exception can occur. For more information about the error, and about the steps that you can take to recover from it, see the Frequently asked questions . You can find more information about snapshots in the following sections: Section 7.1.2, "Ad hoc snapshots" Section 7.1.3, "Incremental snapshots" Default workflow that the Oracle connector uses to perform an initial snapshot The following workflow lists the steps that Debezium takes to create a snapshot. These steps describe the process for a snapshot when the snapshot.mode configuration property is set to its default value, which is initial . You can customize the way that the connector creates snapshots by changing the value of the snapshot.mode property. If you configure a different snapshot mode, the connector completes the snapshot by using a modified version of this workflow.
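For example, the snapshot.mode property is set in the connector configuration that you register with Kafka Connect. The following fragment is a minimal sketch only, assuming the standard io.debezium.connector.oracle.OracleConnector class; the required connection, user, and schema history properties are omitted, and the value shown is illustrative:
{
  "connector.class": "io.debezium.connector.oracle.OracleConnector",
  "snapshot.mode": "initial"
}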
When the snapshot mode is set to the default, the connector completes the following tasks to create a snapshot: Establish a connection to the database. Determine the tables to be captured. By default, the connector captures all tables except those with schemas that exclude them from capture . After the snapshot completes, the connector continues to stream data for the specified tables. If you want the connector to capture data only from specific tables you can direct the connector to capture the data for only a subset of tables or table elements by setting properties such as table.include.list or table.exclude.list . Obtain a ROW SHARE MODE lock on each of the captured tables to prevent structural changes from occurring during creation of the snapshot. Debezium holds the locks for only a short time. Read the current system change number (SCN) position from the server's redo log. Capture the structure of all database tables, or all tables that are designated for capture. The connector persists schema information in its internal database schema history topic. The schema history provides information about the structure that is in effect when a change event occurs. Note By default, the connector captures the schema of every table in the database that is in capture mode, including tables that are not configured for capture. If tables are not configured for capture, the initial snapshot captures only their structure; it does not capture any table data. For more information about why snapshots persist schema information for tables that you did not include in the initial snapshot, see Understanding why initial snapshots capture the schema for all tables . Release the locks obtained in Step 3. Other database clients can now write to any previously locked tables. At the SCN position that was read in Step 4, the connector scans the tables that are designated for capture ( SELECT * FROM ... AS OF SCN 123 ). During the scan, the connector completes the following tasks: Confirms that the table was created before the snapshot began. If the table was created after the snapshot began, the connector skips the table. After the snapshot is complete, and the connector transitions to streaming, it emits change events for any tables that were created after the snapshot began. Produces a read event for each row that is captured from a table. All read events contain the same SCN position, which is the SCN position that was obtained in step 4. Emits each read event to the Kafka topic for the source table. Releases data table locks, if applicable. Record the successful completion of the snapshot in the connector offsets. The resulting initial snapshot captures the current state of each row in the captured tables. From this baseline state, the connector captures subsequent changes as they occur. After the snapshot process begins, if the process is interrupted due to connector failure, rebalancing, or other reasons, the process restarts after the connector restarts. After the connector completes the initial snapshot, it continues streaming from the position that it read in Step 3 so that it does not miss any updates. If the connector stops again for any reason, after it restarts, it resumes streaming changes from where it previously left off. Table 7.1. Settings for snapshot.mode connector configuration property Setting Description always Perform snapshot on each connector start. After the snapshot completes, the connector begins to stream event records for subsequent database changes. 
initial The connector performs a database snapshot as described in the default workflow for creating an initial snapshot . After the snapshot completes, the connector begins to stream event records for subsequent database changes. initial_only The connector performs a database snapshot and stops before streaming any change event records, not allowing any subsequent change events to be captured. schema_only The connector captures the structure of all relevant tables, performing all of the steps described in the default snapshot workflow , except that it does not create READ events to represent the data set at the point of the connector's start-up (Step 6). schema_only_recovery Set this option to restore a database schema history topic that is lost or corrupted. After a restart, the connector runs a snapshot that rebuilds the topic from the source tables. You can also set the property to periodically prune a database schema history topic that experiences unexpected growth. WARNING: Do not use this mode to perform a snapshot if schema changes were committed to the database after the last connector shutdown. For more information, see snapshot.mode in the table of connector configuration properties. 7.1.1.1. Description of why initial snapshots capture the schema history for all tables The initial snapshot that a connector runs captures two types of information: Table data Information about INSERT , UPDATE , and DELETE operations in tables that are named in the connector's table.include.list property. Schema data DDL statements that describe the structural changes that are applied to tables. Schema data is persisted to both the internal schema history topic, and to the connector's schema change topic, if one is configured. After you run an initial snapshot, you might notice that the snapshot captures schema information for tables that are not designated for capture. By default, initial snapshots are designed to capture schema information for every table that is present in the database, not only from tables that are designated for capture. Connectors require that the table's schema is present in the schema history topic before they can capture a table. By enabling the initial snapshot to capture schema data for tables that are not part of the original capture set, Debezium prepares the connector to readily capture event data from these tables should that later become necessary. If the initial snapshot does not capture a table's schema, you must add the schema to the history topic before the connector can capture data from the table. In some cases, you might want to limit schema capture in the initial snapshot. This can be useful when you want to reduce the time required to complete a snapshot, or when Debezium connects to the database instance through a user account that has access to multiple logical databases, but you want the connector to capture changes only from tables in a specific logical database. Additional information Capturing data from tables not captured by the initial snapshot (no schema change) Capturing data from tables not captured by the initial snapshot (schema change) Setting the schema.history.internal.store.only.captured.tables.ddl property to specify the tables from which to capture schema information. Setting the schema.history.internal.store.only.captured.databases.ddl property to specify the logical databases from which to capture schema changes. 7.1.1.2.
Capturing data from tables not captured by the initial snapshot (no schema change) In some cases, you might want the connector to capture data from a table whose schema was not captured by the initial snapshot. Depending on the connector configuration, the initial snapshot might capture the table schema only for specific tables in the database. If the table schema is not present in the history topic, the connector fails to capture the table, and reports a missing schema error. You might still be able to capture data from the table, but you must perform additional steps to add the table schema. Prerequisites You want to capture data from a table with a schema that the connector did not capture during the initial snapshot. All entries for the table in the transaction log use the same schema. For information about capturing data from a new table that has undergone structural changes, see Section 7.1.1.3, "Capturing data from tables not captured by the initial snapshot (schema change)" . Procedure Stop the connector. Remove the internal database schema history topic that is specified by the schema.history.internal.kafka.topic property . In the connector configuration: Set the snapshot.mode to schema_only_recovery . Set the value of schema.history.internal.store.only.captured.tables.ddl to false . Add the tables that you want the connector to capture to table.include.list . This guarantees that in the future, the connector can reconstruct the schema history for all tables. Restart the connector. The snapshot recovery process rebuilds the schema history based on the current structure of the tables. (Optional) After the snapshot completes, initiate an incremental snapshot to capture existing data for newly added tables along with changes to other tables that occurred while that connector was off-line. (Optional) Reset the snapshot.mode back to schema_only to prevent the connector from initiating recovery after a future restart. 7.1.1.3. Capturing data from tables not captured by the initial snapshot (schema change) If a schema change is applied to a table, records that are committed before the schema change have different structures than those that were committed after the change. When Debezium captures data from a table, it reads the schema history to ensure that it applies the correct schema to each event. If the schema is not present in the schema history topic, the connector is unable to capture the table, and an error results. If you want to capture data from a table that was not captured by the initial snapshot, and the schema of the table was modified, you must add the schema to the history topic, if it is not already available. You can add the schema by running a new schema snapshot, or by running an initial snapshot for the table. Prerequisites You want to capture data from a table with a schema that the connector did not capture during the initial snapshot. A schema change was applied to the table so that the records to be captured do not have a uniform structure. Procedure Initial snapshot captured the schema for all tables ( store.only.captured.tables.ddl was set to false ) Edit the table.include.list property to specify the tables that you want to capture. Restart the connector. Initiate an incremental snapshot if you want to capture existing data from the newly added tables. 
Initial snapshot did not capture the schema for all tables ( store.only.captured.tables.ddl was set to true ) If the initial snapshot did not save the schema of the table that you want to capture, complete one of the following procedures: Procedure 1: Schema snapshot, followed by incremental snapshot In this procedure, the connector first performs a schema snapshot. You can then initiate an incremental snapshot to enable the connector to synchronize data. Stop the connector. Remove the internal database schema history topic that is specified by the schema.history.internal.kafka.topic property . Clear the offsets in the configured Kafka Connect offset.storage.topic . For more information about how to remove offsets, see the Debezium community FAQ . Warning Removing offsets should be performed only by advanced users who have experience in manipulating internal Kafka Connect data. This operation is potentially destructive, and should be performed only as a last resort. Set values for properties in the connector configuration as described in the following steps: Set the value of the snapshot.mode property to schema_only . Edit the table.include.list to add the tables that you want to capture. Restart the connector. Wait for Debezium to capture the schema of the new and existing tables. Data changes that occurred in any tables after the connector stopped are not captured. To ensure that no data is lost, initiate an incremental snapshot . Procedure 2: Initial snapshot, followed by optional incremental snapshot In this procedure, the connector performs a full initial snapshot of the database. As with any initial snapshot, in a database with many large tables, running an initial snapshot can be a time-consuming operation. After the snapshot completes, you can optionally trigger an incremental snapshot to capture any changes that occur while the connector is off-line. Stop the connector. Remove the internal database schema history topic that is specified by the schema.history.internal.kafka.topic property . Clear the offsets in the configured Kafka Connect offset.storage.topic . For more information about how to remove offsets, see the Debezium community FAQ . Warning Removing offsets should be performed only by advanced users who have experience in manipulating internal Kafka Connect data. This operation is potentially destructive, and should be performed only as a last resort. Edit the table.include.list to add the tables that you want to capture. Set values for properties in the connector configuration as described in the following steps: Set the value of the snapshot.mode property to initial . (Optional) Set schema.history.internal.store.only.captured.tables.ddl to false . Restart the connector. The connector takes a full database snapshot. After the snapshot completes, the connector transitions to streaming. (Optional) To capture any data that changed while the connector was off-line, initiate an incremental snapshot . 7.1.2. Ad hoc snapshots By default, a connector runs an initial snapshot operation only after it starts for the first time. Following this initial snapshot, under normal circumstances, the connector does not repeat the snapshot process. Any future change event data that the connector captures comes in through the streaming process only. However, in some situations the data that the connector obtained during the initial snapshot might become stale, lost, or incomplete. To provide a mechanism for recapturing table data, Debezium includes an option to perform ad hoc snapshots.
The following changes in a database might be cause for performing an ad hoc snapshot: The connector configuration is modified to capture a different set of tables. Kafka topics are deleted and must be rebuilt. Data corruption occurs due to a configuration error or some other problem. You can re-run a snapshot for a table for which you previously captured a snapshot by initiating a so-called ad-hoc snapshot . Ad hoc snapshots require the use of signaling tables . You initiate an ad hoc snapshot by sending a signal request to the Debezium signaling table. When you initiate an ad hoc snapshot of an existing table, the connector appends content to the topic that already exists for the table. If a previously existing topic was removed, Debezium can create a topic automatically if automatic topic creation is enabled. Ad hoc snapshot signals specify the tables to include in the snapshot. The snapshot can capture the entire contents of the database, or capture only a subset of the tables in the database. Also, the snapshot can capture a subset of the contents of the table(s) in the database. You specify the tables to capture by sending an execute-snapshot message to the signaling table. Set the type of the execute-snapshot signal to incremental , and provide the names of the tables to include in the snapshot, as described in the following table: Table 7.2. Example of an ad hoc execute-snapshot signal record Field Default Value type incremental Specifies the type of snapshot that you want to run. Setting the type is optional. Currently, you can request only incremental snapshots. data-collections N/A An array that contains regular expressions matching the fully-qualified names of the table to be snapshotted. The format of the names is the same as for the signal.data.collection configuration option. additional-condition N/A An optional string, which specifies a condition based on the column(s) of the table(s), to capture a subset of the contents of the table(s). surrogate-key N/A An optional string that specifies the column name that the connector uses as the primary key of a table during the snapshot process. Triggering an ad hoc snapshot You initiate an ad hoc snapshot by adding an entry with the execute-snapshot signal type to the signaling table. After the connector processes the message, it begins the snapshot operation. The snapshot process reads the first and last primary key values and uses those values as the start and end point for each table. Based on the number of entries in the table, and the configured chunk size, Debezium divides the table into chunks, and proceeds to snapshot each chunk, in succession, one at a time. Currently, the execute-snapshot action type triggers incremental snapshots only. For more information, see Incremental snapshots . 7.1.3. Incremental snapshots To provide flexibility in managing snapshots, Debezium includes a supplementary snapshot mechanism, known as incremental snapshotting . Incremental snapshots rely on the Debezium mechanism for sending signals to a Debezium connector . In an incremental snapshot, instead of capturing the full state of a database all at once, as in an initial snapshot, Debezium captures each table in phases, in a series of configurable chunks. You can specify the tables that you want the snapshot to capture and the size of each chunk . The chunk size determines the number of rows that the snapshot collects during each fetch operation on the database. The default chunk size for incremental snapshots is 1024 rows. 
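If the default chunk size is not appropriate for your tables, you can tune it in the connector configuration. The following fragment is a minimal sketch that assumes the standard Debezium incremental.snapshot.chunk.size property; the value shown is illustrative only:
{
  "incremental.snapshot.chunk.size": "2048"
}
In general, a larger chunk size means fewer snapshot queries, but more rows are buffered during each snapshot window.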
As an incremental snapshot proceeds, Debezium uses watermarks to track its progress, maintaining a record of each table row that it captures. This phased approach to capturing data provides the following advantages over the standard initial snapshot process: You can run incremental snapshots in parallel with streamed data capture, instead of postponing streaming until the snapshot completes. The connector continues to capture near real-time events from the change log throughout the snapshot process, and neither operation blocks the other. If the progress of an incremental snapshot is interrupted, you can resume it without losing any data. After the process resumes, the snapshot begins at the point where it stopped, rather than recapturing the table from the beginning. You can run an incremental snapshot on demand at any time, and repeat the process as needed to adapt to database updates. For example, you might re-run a snapshot after you modify the connector configuration to add a table to its table.include.list property. Incremental snapshot process When you run an incremental snapshot, Debezium sorts each table by primary key and then splits the table into chunks based on the configured chunk size . Working chunk by chunk, it then captures each table row in a chunk. For each row that it captures, the snapshot emits a READ event. That event represents the value of the row when the snapshot for the chunk began. As a snapshot proceeds, it's likely that other processes continue to access the database, potentially modifying table records. To reflect such changes, INSERT , UPDATE , or DELETE operations are committed to the transaction log as per usual. Similarly, the ongoing Debezium streaming process continues to detect these change events and emits corresponding change event records to Kafka. How Debezium resolves collisions among records with the same primary key In some cases, the UPDATE or DELETE events that the streaming process emits are received out of sequence. That is, the streaming process might emit an event that modifies a table row before the snapshot captures the chunk that contains the READ event for that row. When the snapshot eventually emits the corresponding READ event for the row, its value is already superseded. To ensure that incremental snapshot events that arrive out of sequence are processed in the correct logical order, Debezium employs a buffering scheme for resolving collisions. Only after collisions between the snapshot events and the streamed events are resolved does Debezium emit an event record to Kafka. Snapshot window To assist in resolving collisions between late-arriving READ events and streamed events that modify the same table row, Debezium employs a so-called snapshot window . The snapshot window demarcates the interval during which an incremental snapshot captures data for a specified table chunk. Before the snapshot window for a chunk opens, Debezium follows its usual behavior and emits events from the transaction log directly downstream to the target Kafka topic. But from the moment that the snapshot for a particular chunk opens, until it closes, Debezium performs a de-duplication step to resolve collisions between events that have the same primary key. For each data collection, Debezium emits two types of events, and stores the records for them both in a single destination Kafka topic. The snapshot records that it captures directly from a table are emitted as READ operations.
Meanwhile, as users continue to update records in the data collection, and the transaction log is updated to reflect each commit, Debezium emits UPDATE or DELETE operations for each change. As the snapshot window opens, and Debezium begins processing a snapshot chunk, it delivers snapshot records to a memory buffer. During the snapshot window, the primary keys of the READ events in the buffer are compared to the primary keys of the incoming streamed events. If no match is found, the streamed event record is sent directly to Kafka. If Debezium detects a match, it discards the buffered READ event, and writes the streamed record to the destination topic, because the streamed event logically supersedes the static snapshot event. After the snapshot window for the chunk closes, the buffer contains only READ events for which no related transaction log events exist. Debezium emits these remaining READ events to the table's Kafka topic. The connector repeats the process for each snapshot chunk. Warning The Debezium connector for Oracle does not support schema changes while an incremental snapshot is running. 7.1.3.1. Triggering an incremental snapshot Currently, the only way to initiate an incremental snapshot is to send an ad hoc snapshot signal to the signaling table on the source database. You submit a signal to the signaling table as SQL INSERT queries. After Debezium detects the change in the signaling table, it reads the signal, and runs the requested snapshot operation. The query that you submit specifies the tables to include in the snapshot, and, optionally, specifies the kind of snapshot operation. Currently, the only valid option for snapshot operations is the default value, incremental . To specify the tables to include in the snapshot, provide a data-collections array that lists the tables or an array of regular expressions used to match tables, for example, {"data-collections": ["public.MyFirstTable", "public.MySecondTable"]} The data-collections array for an incremental snapshot signal has no default value. If the data-collections array is empty, Debezium detects that no action is required and does not perform a snapshot. Note If the name of a table that you want to include in a snapshot contains a dot ( . ) in the name of the database, schema, or table, to add the table to the data-collections array, you must escape each part of the name in double quotes. For example, to include a table that exists in the public schema and that has the name My.Table , use the following format: "public"."My.Table" . Prerequisites Signaling is enabled . A signaling data collection exists on the source database. The signaling data collection is specified in the signal.data.collection property. Using a source signaling channel to trigger an incremental snapshot Send a SQL query to add the ad hoc incremental snapshot request to the signaling table: INSERT INTO <signalTable> (id, type, data) VALUES ( '<id>' , '<snapshotType>' , '{"data-collections": [" <tableName> "," <tableName> "],"type":" <snapshotType> ","additional-condition":" <additional-condition> "}'); For example, INSERT INTO myschema.debezium_signal (id, type, data) 1 values ('ad-hoc-1', 2 'execute-snapshot', 3 '{"data-collections": ["schema1.table1", "schema2.table2"], 4 "type":"incremental", 5 "additional-condition":"color=blue"}'); 6 The values of the id , type , and data parameters in the command correspond to the fields of the signaling table . The following table describes the parameters in the example: Table 7.3.
Descriptions of fields in a SQL command for sending an incremental snapshot signal to the signaling table Item Value Description 1 myschema.debezium_signal Specifies the fully-qualified name of the signaling table on the source database. 2 ad-hoc-1 The id parameter specifies an arbitrary string that is assigned as the id identifier for the signal request. Use this string to identify logging messages to entries in the signaling table. Debezium does not use this string. Rather, during the snapshot, Debezium generates its own id string as a watermarking signal. 3 execute-snapshot The type parameter specifies the operation that the signal is intended to trigger. 4 data-collections A required component of the data field of a signal that specifies an array of table names or regular expressions to match table names to include in the snapshot. The array lists regular expressions which match tables by their fully-qualified names, using the same format as you use to specify the name of the connector's signaling table in the signal.data.collection configuration property. 5 incremental An optional type component of the data field of a signal that specifies the kind of snapshot operation to run. Currently, the only valid option is the default value, incremental . If you do not specify a value, the connector runs an incremental snapshot. 6 additional-condition An optional string, which specifies a condition based on the column(s) of the table(s), to capture a subset of the contents of the tables. For more information about the additional-condition parameter, see Ad hoc incremental snapshots with additional-condition . Ad hoc incremental snapshots with additional-condition If you want a snapshot to include only a subset of the content in a table, you can modify the signal request by appending an additional-condition parameter to the snapshot signal. The SQL query for a typical snapshot takes the following form: SELECT * FROM <tableName> .... By adding an additional-condition parameter, you append a WHERE condition to the SQL query, as in the following example: SELECT * FROM <tableName> WHERE <additional-condition> .... The following example shows a SQL query to send an ad hoc incremental snapshot request with an additional condition to the signaling table: INSERT INTO <signalTable> (id, type, data) VALUES ( '<id>' , '<snapshotType>' , '{"data-collections": [" <tableName> "," <tableName> "],"type":" <snapshotType> ","additional-condition":" <additional-condition> "}'); For example, suppose you have a products table that contains the following columns: id (primary key) color quantity If you want an incremental snapshot of the products table to include only the data items where color=blue , you can use the following SQL statement to trigger the snapshot: INSERT INTO myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["schema1.products"],"type":"incremental", "additional-condition":"color=blue"}'); The additional-condition parameter also enables you to pass conditions that are based on more than one column. 
For example, using the products table from the example, you can submit a query that triggers an incremental snapshot that includes the data of only those items for which color=blue and quantity>10 : INSERT INTO myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["schema1.products"],"type":"incremental", "additional-condition":"color=blue AND quantity>10"}'); The following example shows the JSON for an incremental snapshot event that is captured by a connector. Example: Incremental snapshot event message { "before":null, "after": { "pk":"1", "value":"New data" }, "source": { ... "snapshot":"incremental" 1 }, "op":"r", 2 "ts_ms":"1620393591654", "transaction":null } Item Field name Description 1 snapshot Specifies the type of snapshot operation to run. Currently, the only valid option is the default value, incremental . Specifying a type value in the SQL query that you submit to the signaling table is optional. If you do not specify a value, the connector runs an incremental snapshot. 2 op Specifies the event type. The value for snapshot events is r , signifying a READ operation. 7.1.3.2. Using the Kafka signaling channel to trigger an incremental snapshot You can send a message to the configured Kafka topic to request the connector to run an ad hoc incremental snapshot. The key of the Kafka message must match the value of the topic.prefix connector configuration option. The value of the message is a JSON object with type and data fields. The signal type is execute-snapshot , and the data field must have the following fields: Table 7.4. Execute snapshot data fields Field Default Value type incremental The type of the snapshot to be executed. Currently Debezium supports only the incremental type. See the section for more details. data-collections N/A An array of comma-separated regular expressions that match the fully-qualified names of tables to include in the snapshot. Specify the names by using the same format as is required for the signal.data.collection configuration option. additional-condition N/A An optional string that specifies a condition that the connector evaluates to designate a subset of the table content to include in a snapshot. An example of the execute-snapshot Kafka message: Key = <topicPrefix> Value = {"type":"execute-snapshot","data": {"data-collections": ["schema1.table1", "schema2.table2"], "type": "incremental"}} Ad hoc incremental snapshots with additional-condition Debezium uses the additional-condition field to select a subset of a table's content. Typically, when Debezium runs a snapshot, it runs a SQL query such as: SELECT * FROM <tableName> ... . When the snapshot request includes an additional-condition , the additional-condition is appended to the SQL query, for example: SELECT * FROM <tableName> WHERE <additional-condition> ... . For example, given a products table with the columns id (primary key), color , and brand , if you want a snapshot to include only content for which color='blue' , when you request the snapshot, you could append an additional-condition statement to filter the content: Key = <topicPrefix> Value = {"type":"execute-snapshot","data": {"data-collections": ["schema1.products"], "type": "incremental", "additional-condition":"color='blue'"}} You can use the additional-condition statement to pass conditions based on multiple columns. For example, using the same products table as in the example, if you want a snapshot to include only the content from the products table for which color='blue' , and brand='MyBrand' , you could send the following request: Key = <topicPrefix> Value = {"type":"execute-snapshot","data": {"data-collections": ["schema1.products"], "type": "incremental", "additional-condition":"color='blue' AND brand='MyBrand'"}} 7.1.3.3. Stopping an incremental snapshot You can also stop an incremental snapshot by sending a signal to the table on the source database. You submit a stop snapshot signal to the table by sending a SQL INSERT query.
After Debezium detects the change in the signaling table, it reads the signal, and stops the incremental snapshot operation if it's in progress. The query that you submit specifies the snapshot operation of incremental , and, optionally, the tables of the current running snapshot to be removed. Prerequisites Signaling is enabled . A signaling data collection exists on the source database. The signaling data collection is specified in the signal.data.collection property. Using a source signaling channel to stop an incremental snapshot Send a SQL query to stop the ad hoc incremental snapshot to the signaling table: INSERT INTO <signalTable> (id, type, data) values ( '<id>' , 'stop-snapshot', '{"data-collections": [" <tableName> "," <tableName> "],"type":"incremental"}'); For example, INSERT INTO myschema.debezium_signal (id, type, data) 1 values ('ad-hoc-1', 2 'stop-snapshot', 3 '{"data-collections": ["schema1.table1", "schema2.table2"], 4 "type":"incremental"}'); 5 The values of the id , type , and data parameters in the signal command correspond to the fields of the signaling table . The following table describes the parameters in the example: Table 7.5. Descriptions of fields in a SQL command for sending a stop incremental snapshot signal to the signaling table Item Value Description 1 myschema.debezium_signal Specifies the fully-qualified name of the signaling table on the source database. 2 ad-hoc-1 The id parameter specifies an arbitrary string that is assigned as the id identifier for the signal request. Use this string to identify logging messages to entries in the signaling table. Debezium does not use this string. 3 stop-snapshot The type parameter specifies the operation that the signal is intended to trigger. 4 data-collections An optional component of the data field of a signal that specifies an array of table names or regular expressions to match table names to remove from the snapshot. The array lists regular expressions which match tables by their fully-qualified names, using the same format as you use to specify the name of the connector's signaling table in the signal.data.collection configuration property. If this component of the data field is omitted, the signal stops the entire incremental snapshot that is in progress. 5 incremental A required component of the data field of a signal that specifies the kind of snapshot operation that is to be stopped. Currently, the only valid option is incremental . If you do not specify a type value, the signal fails to stop the incremental snapshot. 7.1.3.4. Using the Kafka signaling channel to stop an incremental snapshot You can send a signal message to the configured Kafka signaling topic to stop an ad hoc incremental snapshot. The key of the Kafka message must match the value of the topic.prefix connector configuration option. The value of the message is a JSON object with type and data fields. The signal type is stop-snapshot , and the data field must have the following fields: Table 7.6. Execute snapshot data fields Field Default Value type incremental The type of the snapshot to be executed. Currently Debezium supports only the incremental type. See the section for more details. data-collections N/A An optional array of comma-separated regular expressions that match the fully-qualified names of the tables to include in the snapshot. Specify the names by using the same format as is required for the signal.data.collection configuration option. The following example shows a typical stop-snapshot Kafka message: Key = <topicPrefix> Value = {"type":"stop-snapshot","data": {"data-collections": ["schema1.table1", "schema2.table2"], "type": "incremental"}} 7.1.4.
Default names of Kafka topics that receive Debezium Oracle change event records By default, the Oracle connector writes change events for all INSERT , UPDATE , and DELETE operations that occur in a table to a single Apache Kafka topic that is specific to that table. The connector uses the following convention to name change event topics: topicPrefix.schemaName.tableName The following list provides definitions for the components of the default name: topicPrefix The topic prefix as specified by the topic.prefix connector configuration property. schemaName The name of the schema in which the operation occurred. tableName The name of the table in which the operation occurred. For example, if fulfillment is the server name, inventory is the schema name, and the database contains tables with the names orders , customers , and products , the Debezium Oracle connector emits events to the following Kafka topics, one for each table in the database: fulfillment.inventory.orders fulfillment.inventory.customers fulfillment.inventory.products The connector applies similar naming conventions to label its internal database schema history topics, schema change topics , and transaction metadata topics . If the default topic names do not meet your requirements, you can configure custom topic names. To configure custom topic names, you specify regular expressions in the logical topic routing SMT. For more information about using the logical topic routing SMT to customize topic naming, see Topic routing . 7.1.5. How Debezium Oracle connectors handle database schema changes When a database client queries a database, the client uses the database's current schema. However, the database schema can be changed at any time, which means that the connector must be able to identify what the schema was at the time each insert, update, or delete operation was recorded. Also, a connector cannot necessarily apply the current schema to every event. If an event is relatively old, it's possible that it was recorded before the current schema was applied. To ensure correct processing of events that occur after a schema change, Oracle includes in the redo log not only the row-level changes that affect the data, but also the DDL statements that are applied to the database. As the connector encounters these DDL statements in the redo log, it parses them and updates an in-memory representation of each table's schema. The connector uses this schema representation to identify the structure of the tables at the time of each insert, update, or delete operation and to produce the appropriate change event. In a separate database schema history Kafka topic, the connector records all DDL statements along with the position in the redo log where each DDL statement appeared. When the connector restarts after either a crash or a graceful stop, it starts reading the redo log from a specific position, that is, from a specific point in time. The connector rebuilds the table structures that existed at this point in time by reading the database schema history Kafka topic and parsing all DDL statements up to the point in the redo log where the connector is starting. This database schema history topic is for internal connector use only. Optionally, the connector can also emit schema change events to a different topic that is intended for consumer applications . Additional resources Default names for topics that receive Debezium event records. 7.1.6.
How Debezium Oracle connectors expose database schema changes You can configure a Debezium Oracle connector to produce schema change events that describe structural changes that are applied to tables in the database. The connector writes schema change events to a Kafka topic named <serverName> , where serverName is the namespace that is specified in the topic.prefix configuration property. Debezium emits a new message to the schema change topic whenever it streams data from a new table, or when the structure of the table is altered. Messages that the connector sends to the schema change topic contain a payload, and, optionally, also contain the schema of the change event message. The payload of a schema change event message includes the following elements: ddl Provides the SQL CREATE , ALTER , or DROP statement that results in the schema change. databaseName The name of the database to which the statements are applied. The value of databaseName serves as the message key. tableChanges A structured representation of the entire table schema after the schema change. The tableChanges field contains an array that includes entries for each column of the table. Because the structured representation presents data in JSON or Avro format, consumers can easily read messages without first processing them through a DDL parser. Important By default, the connector uses the ALL_TABLES database view to identify the table names to store in the schema history topic. Within that view, the connector can access data only from tables that are available to the user account through which it connects to the database. You can modify settings so that the schema history topic stores a different subset of tables. Use one of the following methods to alter the set of tables that the topic stores: Change the permissions of the account that Debezium uses to access the database so that a different set of tables is visible in the ALL_TABLES view. Set the connector property schema.history.internal.store.only.captured.tables.ddl to true . Important When the connector is configured to capture a table, it stores the history of the table's schema changes not only in the schema change topic, but also in an internal database schema history topic. The internal database schema history topic is for connector use only and it is not intended for direct use by consuming applications. Ensure that applications that require notifications about schema changes consume that information only from the schema change topic. Important Never partition the database schema history topic. For the database schema history topic to function correctly, it must maintain a consistent, global order of the event records that the connector emits to it. To ensure that the topic is not split among partitions, set the partition count for the topic by using one of the following methods: If you create the database schema history topic manually, specify a partition count of 1 . If you use the Apache Kafka broker to create the database schema history topic automatically, set the value of the Kafka num.partitions configuration option to 1 . Example: Message emitted to the Oracle connector schema change topic The following example shows a typical schema change message in JSON format. The message contains a logical representation of the table schema. { "schema": { ...
}, "payload": { "source": { "version": "2.3.4.Final", "connector": "oracle", "name": "server1", "ts_ms": 1588252618953, "snapshot": "true", "db": "ORCLPDB1", "schema": "DEBEZIUM", "table": "CUSTOMERS", "txId" : null, "scn" : "1513734", "commit_scn": "1513754", "lcr_position" : null, "rs_id": "001234.00012345.0124", "ssn": 1, "redo_thread": 1, "user_name": "user" }, "ts_ms": 1588252618953, 1 "databaseName": "ORCLPDB1", 2 "schemaName": "DEBEZIUM", // "ddl": "CREATE TABLE \"DEBEZIUM\".\"CUSTOMERS\" \n ( \"ID\" NUMBER(9,0) NOT NULL ENABLE, \n \"FIRST_NAME\" VARCHAR2(255), \n \"LAST_NAME" VARCHAR2(255), \n \"EMAIL\" VARCHAR2(255), \n PRIMARY KEY (\"ID\") ENABLE, \n SUPPLEMENTAL LOG DATA (ALL) COLUMNS\n ) SEGMENT CREATION IMMEDIATE \n PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 \n NOCOMPRESS LOGGING\n STORAGE(INITIAL 65536 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645\n PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1\n BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)\n TABLESPACE \"USERS\" ", 3 "tableChanges": [ 4 { "type": "CREATE", 5 "id": "\"ORCLPDB1\".\"DEBEZIUM\".\"CUSTOMERS\"", 6 "table": { 7 "defaultCharsetName": null, "primaryKeyColumnNames": [ 8 "ID" ], "columns": [ 9 { "name": "ID", "jdbcType": 2, "nativeType": null, "typeName": "NUMBER", "typeExpression": "NUMBER", "charsetName": null, "length": 9, "scale": 0, "position": 1, "optional": false, "autoIncremented": false, "generated": false }, { "name": "FIRST_NAME", "jdbcType": 12, "nativeType": null, "typeName": "VARCHAR2", "typeExpression": "VARCHAR2", "charsetName": null, "length": 255, "scale": null, "position": 2, "optional": false, "autoIncremented": false, "generated": false }, { "name": "LAST_NAME", "jdbcType": 12, "nativeType": null, "typeName": "VARCHAR2", "typeExpression": "VARCHAR2", "charsetName": null, "length": 255, "scale": null, "position": 3, "optional": false, "autoIncremented": false, "generated": false }, { "name": "EMAIL", "jdbcType": 12, "nativeType": null, "typeName": "VARCHAR2", "typeExpression": "VARCHAR2", "charsetName": null, "length": 255, "scale": null, "position": 4, "optional": false, "autoIncremented": false, "generated": false } ], "attributes": [ 10 { "customAttribute": "attributeValue" } ] } } ] } } Table 7.7. Descriptions of fields in messages emitted to the schema change topic Item Field name Description 1 ts_ms Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. In the source object, ts_ms indicates the time that the change was made in the database. By comparing the value for payload.source.ts_ms with the value for payload.ts_ms, you can determine the lag between the source database update and Debezium. 2 databaseName schemaName Identifies the database and the schema that contains the change. 3 ddl This field contains the DDL that is responsible for the schema change. 4 tableChanges An array of one or more items that contain the schema changes generated by a DDL command. 5 type Describes the kind of change. The type can be set to one of the following values: CREATE Table created. ALTER Table modified. DROP Table deleted. 6 id Full identifier of the table that was created, altered, or dropped. In the case of a table rename, this identifier is a concatenation of <old> , <new> table names. 7 table Represents table metadata after the applied change. 8 primaryKeyColumnNames List of columns that compose the table's primary key. 9 columns Metadata for each column in the changed table. 
10 attributes Custom attribute metadata for each table change. In messages that the connector sends to the schema change topic, the message key is the name of the database that contains the schema change. In the following example, the payload field contains the databaseName key: { "schema": { "type": "struct", "fields": [ { "type": "string", "optional": false, "field": "databaseName" } ], "optional": false, "name": "io.debezium.connector.oracle.SchemaChangeKey" }, "payload": { "databaseName": "ORCLPDB1" } } 7.1.7. Debezium Oracle connector-generated events that represent transaction boundaries Debezium can generate events that represent transaction metadata boundaries and that enrich data change event messages. Limits on when Debezium receives transaction metadata Debezium registers and receives metadata only for transactions that occur after you deploy the connector. Metadata for transactions that occur before you deploy the connector is not available. Database transactions are represented by a statement block that is enclosed between the BEGIN and END keywords. Debezium generates transaction boundary events for the BEGIN and END delimiters in every transaction. Transaction boundary events contain the following fields: status BEGIN or END . id String representation of the unique transaction identifier. ts_ms The time of a transaction boundary event ( BEGIN or END event) at the data source. If the data source does not provide Debezium with the event time, then the field instead represents the time at which Debezium processes the event. event_count (for END events) Total number of events emmitted by the transaction. data_collections (for END events) An array of pairs of data_collection and event_count elements that indicates the number of events that the connector emits for changes that originate from a data collection. The following example shows a typical transaction boundary message: Example: Oracle connector transaction boundary event { "status": "BEGIN", "id": "5.6.641", "ts_ms": 1486500577125, "event_count": null, "data_collections": null } { "status": "END", "id": "5.6.641", "ts_ms": 1486500577691, "event_count": 2, "data_collections": [ { "data_collection": "ORCLPDB1.DEBEZIUM.CUSTOMER", "event_count": 1 }, { "data_collection": "ORCLPDB1.DEBEZIUM.ORDER", "event_count": 1 } ] } Unless overridden via the topic.transaction option, the connector emits transaction events to the <topic.prefix> .transaction topic. 7.1.7.1. How the Debezium Oracle connector enriches change event messages with transaction metadata When transaction metadata is enabled, the data message Envelope is enriched with a new transaction field. This field provides information about every event in the form of a composite of fields: id String representation of unique transaction identifier. total_order The absolute position of the event among all events generated by the transaction. data_collection_order The per-data collection position of the event among all events that were emitted by the transaction. The following example shows a typical transaction event message: { "before": null, "after": { "pk": "2", "aa": "1" }, "source": { ... }, "op": "c", "ts_ms": "1580390884335", "transaction": { "id": "5.6.641", "total_order": "1", "data_collection_order": "1" } } Query Modes The Debezium Oracle connector integrates with Oracle LogMiner by default. This integration requires a specialized set of steps which includes generating a complex JDBC SQL query to ingest the changes recorded in the transaction logs as change events. 
The V$LOGMNR_CONTENTS view used by the JDBC SQL query does not have any indices to improve the query's performance, and so there are different query modes that can be used that control how the SQL query is generated as a way to improve the query's execution. The log.mining.query.filter.mode connector property can be configured with one of the following to influence how the JDBC SQL query is generated: none (Default) This mode creates a JDBC query that only filters based on the different operation types, such as inserts, updates, or deletes, at the database level. When filtering the data based on the schema, table, or username include/exclude lists, this is done during the processing loop within the connector. This mode is often useful when capturing a small number of tables from a database that is not heavily saturated with changes. The generated query is quite simple, and focuses primarily on reading as quickly as possible with low database overhead. in This mode creates a JDBC query that filters not only operation types at the database level, but also schema, table, and username include/exclude lists. The query's predicates are generated using a SQL in-clause based on the values specified in the include/exclude list configuration properties. This mode is often useful when capturing a large number of tables from a database that is heavily saturated with changes. The generated query is much more complex than the none mode, and focuses on reducing network overhead and performing as much filtering at the database level as possible. Finally, do not specify regular expressions as part of schema and table include/exclude configuration properties. Using regular expressions will cause the connector to not match changes based on these configuration properties, causing changes to be missed. regex This mode creates a JDBC query that filters not only operation types at the database level, but also schema, table, and username include/exclude lists. However, unlike the in mode, this mode generates a SQL query using the Oracle REGEXP_LIKE operator using a conjunction or disjunction depending on whether include or excluded values are specified. This mode is often useful when capturing a variable number of tables that can be identified using a small number of regular expressions. The generated query is much more complex than any other mode, and focuses on reducing network overhead and performing as much filtering at the database level as possible. 7.1.8. How the Debezium Oracle connector uses event buffering Oracle writes all changes to the redo logs in the order in which they occur, including changes that are later discarded by a rollback. As a result, concurrent changes from separate transactions are intertwined. When the connector first reads the stream of changes, because it cannot immediately determine which changes are committed or rolled back, it temporarily stores the change events in an internal buffer. After a change is committed, the connector writes the change event from the buffer to Kafka. The connector drops change events that are discarded by a rollback. You can configure the buffering mechanism that the connector uses by setting the property log.mining.buffer.type . Heap The default buffer type is configured using memory . Under the default memory setting, the connector uses the heap memory of the JVM process to allocate and manage buffered event records.
If you use the memory buffer setting, be sure that the amount of memory that you allocate to the Java process can accommodate long-running and large transactions in your environment. 7.1.9. How the Debezium Oracle connector detects gaps in SCN values When the Debezium Oracle connector is configured to use LogMiner, it collects change events from Oracle by using a start and end range that is based on system change numbers (SCNs). The connector manages this range automatically, increasing or decreasing the range depending on whether the connector is able to stream changes in near real-time, or must process a backlog of changes due to the volume of large or bulk transactions in the database. Under certain circumstances, the Oracle database advances the SCN by an unusually high amount, rather than increasing the SCN value at a constant rate. Such a jump in the SCN value can occur because of the way that a particular integration interacts with the database, or as a result of events such as hot backups. The Debezium Oracle connector relies on the following configuration properties to detect the SCN gap and adjust the mining range. log.mining.scn.gap.detection.gap.size.min Specifies the minimum gap size. log.mining.scn.gap.detection.time.interval.max.ms Specifies the maximum time interval. The connector first compares the difference in the number of changes between the current SCN and the highest SCN in the current mining range. If the difference between the current SCN value and the highest SCN value is greater than the minimum gap size, then the connector has potentially detected a SCN gap. To confirm whether a gap exists, the connector compares the timestamps of the current SCN and the SCN at the end of the mining range. If the difference between the timestamps is less than the maximum time interval, then the existence of an SCN gap is confirmed. When an SCN gap occurs, the Debezium connector automatically uses the current SCN as the end point for the range of the current mining session. This allows the connector to quickly catch up to the real-time events without mining smaller ranges in between that return no changes because the SCN value was increased by an unexpectedly large number. When the connector performs the preceding steps in response to an SCN gap, it ignores the value that is specified by the log.mining.batch.size.max property. After the connector finishes the mining session and catches back up to real-time events, it resumes enforcement of the maximum log mining batch size. Warning SCN gap detection is available only if the large SCN increment occurs while the connector is running and processing near real-time events. 7.1.10. How Debezium manages offsets in databases that change infrequently The Debezium Oracle connector tracks system change numbers in the connector offsets so that when the connector is restarted, it can begin where it left off. These offsets are part of each emitted change event; however, when the frequency of database changes are low (every few hours or days), the offsets can become stale and prevent the connector from successfully restarting if the system change number is no longer available in the transaction logs. For connectors that use non-CDB mode to connect to Oracle, you can enable heartbeat.interval.ms to force the connector to emit a heartbeat event at regular intervals so that offsets remain synchronized. For connectors that use CDB mode to connect to Oracle, maintaining synchronization is more complicated. 
Not only must you set heartbeat.interval.ms , but it's also necessary to set heartbeat.action.query . Specifying both properties is required, because in CDB mode, the connector specifically tracks changes inside the PDB only. A supplementary mechanism is needed to trigger change events from within the pluggable database. At regular intervals, the heartbeat action query causes the connector to insert a new table row, or update an existing row in the pluggable database. Debezium detects the table changes and emits change events for them, ensuring that offsets remain synchronized, even in pluggable databases that process changes infrequently. Note For the connector to use the heartbeat.action.query with tables that are not owned by the connector user account , you must grant the connector user permission to run the necessary INSERT or UPDATE queries on those tables. 7.2. Descriptions of Debezium Oracle connector data change events Every data change event that the Oracle connector emits has a key and a value. The structures of the key and value depend on the table from which the change events originate. For information about how Debezium constructs topic names, see Topic names . Warning The Debezium Oracle connector ensures that all Kafka Connect schema names are valid Avro schema names . This means that the logical server name must start with alphabetic characters or an underscore ([a-z,A-Z,_]), and the remaining characters in the logical server name and all characters in the schema and table names must be alphanumeric characters or an underscore ([a-z,A-Z,0-9,\_]). The connector automatically replaces invalid characters with an underscore character. Unexpected naming conflicts can result when the only distinguishing characters between multiple logical server names, schema names, or table names are not valid characters, and those characters are replaced with underscores. Debezium and Kafka Connect are designed around continuous streams of event messages . However, the structure of these events might change over time, which can be difficult for topic consumers to handle. To facilitate the processing of mutable event structures, each event in Kafka Connect is self-contained. Every message key and value has two parts: a schema and payload . The schema describes the structure of the payload, while the payload contains the actual data. Warning Changes that are performed by the SYS or SYSTEM user accounts are not captured by the connector. The following topics contain more details about data change events: Section 7.2.1, "About keys in Debezium Oracle connector change events" Section 7.2.2, "About values in Debezium Oracle connector change events" 7.2.1. About keys in Debezium Oracle connector change events For each changed table, the change event key is structured such that a field exists for each column in the primary key (or unique key constraint) of the table at the time when the event is created. 
For example, a customers table that is defined in the inventory database schema might have the following change event key: CREATE TABLE customers ( id NUMBER(9) GENERATED BY DEFAULT ON NULL AS IDENTITY (START WITH 1001) NOT NULL PRIMARY KEY, first_name VARCHAR2(255) NOT NULL, last_name VARCHAR2(255) NOT NULL, email VARCHAR2(255) NOT NULL UNIQUE ); If the value of the topic.prefix connector configuration property is set to server1 , the JSON representation for every change event that occurs in the customers table in the database features the following key structure: { "schema": { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "ID" } ], "optional": false, "name": "server1.INVENTORY.CUSTOMERS.Key" }, "payload": { "ID": 1004 } } The schema portion of the key contains a Kafka Connect schema that describes the content of the key portion. In the preceding example, the payload value is not optional, the structure is defined by a schema named server1.INVENTORY.CUSTOMERS.Key , and there is one required field named ID of type int32 . The value of the key's payload field indicates that it is indeed a structure (which in JSON is just an object) with a single ID field, whose value is 1004 . Therefore, you can interpret this key as describing the row in the inventory.customers table (output from the connector named server1 ) whose id primary key column had a value of 1004 . 7.2.2. About values in Debezium Oracle connector change events The structure of a value in a change event message mirrors the structure of the message key in the change event, and contains both a schema section and a payload section. Payload of a change event value An envelope structure in the payload sections of a change event value contains the following fields: op A mandatory field that contains a string value describing the type of operation. The op field in the payload of an Oracle connector change event value contains one of the following values: c (create or insert), u (update), d (delete), or r (read, which indicates a snapshot). before An optional field that, if present, describes the state of the row before the event occurred. The structure is described by the server1.INVENTORY.CUSTOMERS.Value Kafka Connect schema, which the server1 connector uses for all rows in the inventory.customers table. after An optional field that, if present, contains the state of a row after a change occurs. The structure is described by the same server1.INVENTORY.CUSTOMERS.Value Kafka Connect schema that is used for the before field. source A mandatory field that contains a structure that describes the source metadata for the event. In the case of the Oracle connector, the structure includes the following fields: The Debezium version. The connector name. Whether the event is part of an ongoing snapshot or not. The transaction ID (not included for snapshots). The SCN of the change. A timestamp that indicates when the record in the source database changed (for snapshots, the timestamp indicates when the snapshot occurred). The name of the user who made the change. Tip The commit_scn field is optional and describes the SCN of the transaction commit that the change event participates within. ts_ms An optional field that, if present, contains the time (based on the system clock in the JVM that runs the Kafka Connect task) at which the connector processed the event.
Schema of a change event value The schema portion of the event message's value contains a schema that describes the envelope structure of the payload and the nested fields within it. For more information about change event values, see the following topics: create events update events delete events truncate events create events The following example shows the value of a create event value from the customers table that is described in the change event keys example: { "schema": { "type": "struct", "fields": [ { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "ID" }, { "type": "string", "optional": false, "field": "FIRST_NAME" }, { "type": "string", "optional": false, "field": "LAST_NAME" }, { "type": "string", "optional": false, "field": "EMAIL" } ], "optional": true, "name": "server1.DEBEZIUM.CUSTOMERS.Value", "field": "before" }, { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "ID" }, { "type": "string", "optional": false, "field": "FIRST_NAME" }, { "type": "string", "optional": false, "field": "LAST_NAME" }, { "type": "string", "optional": false, "field": "EMAIL" } ], "optional": true, "name": "server1.DEBEZIUM.CUSTOMERS.Value", "field": "after" }, { "type": "struct", "fields": [ { "type": "string", "optional": true, "field": "version" }, { "type": "string", "optional": false, "field": "name" }, { "type": "int64", "optional": true, "field": "ts_ms" }, { "type": "string", "optional": true, "field": "txId" }, { "type": "string", "optional": true, "field": "scn" }, { "type": "string", "optional": true, "field": "commit_scn" }, { "type": "string", "optional": true, "field": "rs_id" }, { "type": "int64", "optional": true, "field": "ssn" }, { "type": "int32", "optional": true, "field": "redo_thread" }, { "type": "string", "optional": true, "field": "user_name" }, { "type": "boolean", "optional": true, "field": "snapshot" } ], "optional": false, "name": "io.debezium.connector.oracle.Source", "field": "source" }, { "type": "string", "optional": false, "field": "op" }, { "type": "int64", "optional": true, "field": "ts_ms" } ], "optional": false, "name": "server1.DEBEZIUM.CUSTOMERS.Envelope" }, "payload": { "before": null, "after": { "ID": 1004, "FIRST_NAME": "Anne", "LAST_NAME": "Kretchmar", "EMAIL": "[email protected]" }, "source": { "version": "2.3.4.Final", "name": "server1", "ts_ms": 1520085154000, "txId": "6.28.807", "scn": "2122185", "commit_scn": "2122185", "rs_id": "001234.00012345.0124", "ssn": 1, "redo_thread": 1, "user_name": "user", "snapshot": false }, "op": "c", "ts_ms": 1532592105975 } } In the preceding example, notice how the event defines the following schema: The envelope ( server1.DEBEZIUM.CUSTOMERS.Envelope ). The source structure ( io.debezium.connector.oracle.Source , which is specific to the Oracle connector and reused across all events). The table-specific schemas for the before and after fields. Tip The names of the schemas for the before and after fields are of the form <logicalName> . <schemaName> . <tableName> .Value , and thus are entirely independent from the schemas for all other tables. As a result, when you use the Avro converter , the Avro schemas for tables in each logical source have their own evolution and history. The payload portion of this event's value , provides information about the event. It describes that a row was created ( op=c ), and shows that the after field value contains the values that were inserted into the ID , FIRST_NAME , LAST_NAME , and EMAIL columns of the row. 
Tip By default, the JSON representations of events are much larger than the rows that they describe. The larger size is due to the JSON representation including both the schema and payload portions of a message. You can use the Avro Converter to decrease the size of messages that the connector writes to Kafka topics. update events The following example shows an update change event that the connector captures from the same table as the preceding create event. { "schema": { ... }, "payload": { "before": { "ID": 1004, "FIRST_NAME": "Anne", "LAST_NAME": "Kretchmar", "EMAIL": "[email protected]" }, "after": { "ID": 1004, "FIRST_NAME": "Anne", "LAST_NAME": "Kretchmar", "EMAIL": "[email protected]" }, "source": { "version": "2.3.4.Final", "name": "server1", "ts_ms": 1520085811000, "txId": "6.9.809", "scn": "2125544", "commit_scn": "2125544", "rs_id": "001234.00012345.0124", "ssn": 1, "redo_thread": 1, "user_name": "user", "snapshot": false }, "op": "u", "ts_ms": 1532592713485 } } The payload has the same structure as the payload of a create (insert) event, but the following values are different: The value of the op field is u , signifying that this row changed because of an update. The before field shows the former state of the row with the values that were present before the update database commit. The after field shows the updated state of the row, with the EMAIL value now set to [email protected] . The structure of the source field includes the same fields as before, but the values are different, because the connector captured the event from a different position in the redo log. The ts_ms field shows the timestamp that indicates when Debezium processed the event. The payload section reveals several other useful pieces of information. For example, by comparing the before and after structures, we can determine how a row changed as the result of a commit. The source structure provides information about Oracle's record of this change, providing traceability. It also gives us insight into when this event occurred in relation to other events in this topic and in other topics. Did it occur before, after, or as part of the same commit as another event? Note When the columns for a row's primary/unique key are updated, the value of the row's key changes. As a result, Debezium emits three events after such an update: A DELETE event. A tombstone event with the old key for the row. An INSERT event that provides the new key for the row. delete events The following example shows a delete event for the table that is shown in the preceding create and update event examples. The schema portion of the delete event is identical to the schema portion for those events. { "schema": { ... }, "payload": { "before": { "ID": 1004, "FIRST_NAME": "Anne", "LAST_NAME": "Kretchmar", "EMAIL": "[email protected]" }, "after": null, "source": { "version": "2.3.4.Final", "name": "server1", "ts_ms": 1520085153000, "txId": "6.28.807", "scn": "2122184", "commit_scn": "2122184", "rs_id": "001234.00012345.0124", "ssn": 1, "redo_thread": 1, "user_name": "user", "snapshot": false }, "op": "d", "ts_ms": 1532592105960 } } The payload portion of the event reveals several differences when compared to the payload of a create or update event: The value of the op field is d , signifying that the row was deleted. The before field shows the former state of the row that was deleted with the database commit. The value of the after field is null , signifying that the row no longer exists. 
The structure of the source field includes many of the keys that exist in create or update events, but the values in the ts_ms , scn , and txId fields are different. The ts_ms shows a timestamp that indicates when Debezium processed this event. The delete event provides consumers with the information that they require to process the removal of this row. The Oracle connector's events are designed to work with Kafka log compaction , which allows for the removal of some older messages as long as at least the most recent message for every key is kept. This allows Kafka to reclaim storage space while ensuring the topic contains a complete dataset and can be used for reloading key-based state. When a row is deleted, the delete event value shown in the preceding example still works with log compaction, because Kafka is able to remove all earlier messages that use the same key. The message value must be set to null to instruct Kafka to remove all messages that share the same key. To make this possible, by default, Debezium's Oracle connector always follows a delete event with a special tombstone event that has the same key but null value. You can change the default behavior by setting the connector property tombstones.on.delete . truncate events A truncate change event signals that a table has been truncated. The message key is null in this case, the message value looks like this: { "schema": { ... }, "payload": { "before": null, "after": null, "source": { 1 "version": "2.3.4.Final", "connector": "oracle", "name": "oracle_server", "ts_ms": 1638974535000, "snapshot": "false", "db": "ORCLPDB1", "sequence": null, "schema": "DEBEZIUM", "table": "TEST_TABLE", "txId": "02000a0037030000", "scn": "13234397", "commit_scn": "13271102", "lcr_position": null, "rs_id": "001234.00012345.0124", "ssn": 1, "redo_thread": 1, "user_name": "user" }, "op": "t", 2 "ts_ms": 1638974558961, 3 "transaction": null } } Table 7.8. Descriptions of truncate event value fields Item Field name Description 1 source Mandatory field that describes the source metadata for the event. In a truncate event value, the source field structure is the same as for create , update , and delete events for the same table, provides this metadata: Debezium version Connector type and name Database and table that contains the new row Schema name If the event was part of a snapshot (always false for truncate events) ID of the transaction in which the operation was performed SCN of the operation Timestamp for when the change was made in the database Username who performed the change 2 op Mandatory string that describes the type of operation. The op field value is t , signifying that this table was truncated. 3 ts_ms Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. In the source object, ts_ms indicates the time that the change was made in the database. By comparing the value for payload.source.ts_ms with the value for payload.ts_ms , you can determine the lag between the source database update and Debezium. Because truncate events represent changes made to an entire table, and have no message key, in topics with multiple partitions, there is no guarantee that consumers receive truncate events and change events ( create , update , etc.) for to a table in order. For example, when a consumer reads events from different partitions, it might receive an update event for a table after it receives a truncate event for the same table. 
Ordering can be guaranteed only if a topic uses a single partition. If you do not want to capture truncate events, use the skipped.operations option to filter them out. 7.3. How Debezium Oracle connectors map data types When the Debezium Oracle connector detects a change in the value of a table row, it emits a change event that represents the change. Each change event record is structured in the same way as the original table, with the event record containing a field for each column value. The data type of a table column determines how the connector represents the column's values in change event fields, as shown in the tables in the following sections. For each column in a table, Debezium maps the source data type to a literal type and, and in some cases, a semantic type , in the corresponding event field. Literal types Describe how the value is literally represented, using one of the following Kafka Connect schema types: INT8 , INT16 , INT32 , INT64 , FLOAT32 , FLOAT64 , BOOLEAN , STRING , BYTES , ARRAY , MAP , and STRUCT . Semantic types Describe how the Kafka Connect schema captures the meaning of the field, by using the name of the Kafka Connect schema for the field. If the default data type conversions do not meet your needs, you can create a custom converter for the connector. For some Oracle large object (CLOB, NCLOB, and BLOB) and numeric data types, you can manipulate the way that the connector performs the type mapping by changing default configuration property settings. For more information about how Debezium properties control mappings for these data types, see Binary and Character LOB types and Numeric types . For more information about how the Debezium connector maps Oracle data types, see the following topics: Character types Binary and Character LOB types Numeric types Boolean types Temporal types ROWID types User-defined types Oracle-supplied types Default Values Character types The following table describes how the connector maps basic character types. Table 7.9. Mappings for Oracle basic character types Oracle Data Type Literal type (schema type) Semantic type (schema name) and Notes CHAR[(M)] STRING n/a NCHAR[(M)] STRING n/a NVARCHAR2[(M)] STRING n/a VARCHAR[(M)] STRING n/a VARCHAR2[(M)] STRING n/a Binary and Character LOB types Use of the BLOB , CLOB , and NCLOB with the Debezium Oracle connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . The following table describes how the connector maps binary and character large object (LOB) data types. Table 7.10. Mappings for Oracle binary and character LOB types Oracle Data Type Literal type (schema type) Semantic type (schema name) and Notes BFILE n/a This data type is not supported BLOB BYTES Either the raw bytes (the default), a base64-encoded String, or a base64-url-safe-encoded String, or a hex-encoded String, based on the binary.handling.mode connector configuration property setting. CLOB STRING n/a LONG n/a This data type is not supported. LONG RAW n/a This data type is not supported. 
NCLOB STRING n/a RAW n/a This data type is not supported. Note Oracle only supplies column values for CLOB , NCLOB , and BLOB data types if they're explicitly set or changed in a SQL statement. As a result, change events never contain the value of an unchanged CLOB , NCLOB , or BLOB column. Instead, they contain placeholders as defined by the connector property, unavailable.value.placeholder . If the value of a CLOB , NCLOB , or BLOB column is updated, the new value is placed in the after element of the corresponding update change event. The before element contains the unavailable value placeholder. Numeric types The following table describes how the Debezium Oracle connector maps numeric types. Note You can modify the way that the connector maps the Oracle DECIMAL , NUMBER , NUMERIC , and REAL data types by changing the value of the connector's decimal.handling.mode configuration property. When the property is set to its default value of precise , the connector maps these Oracle data types to the Kafka Connect org.apache.kafka.connect.data.Decimal logical type, as indicated in the table. When the value of the property is set to double or string , the connector uses alternate mappings for some Oracle data types. For more information, see the Semantic type and Notes column in the following table. Table 7.11. Mappings for Oracle numeric data types Oracle Data Type Literal type (schema type) Semantic type (schema name) and Notes BINARY_FLOAT FLOAT32 n/a BINARY_DOUBLE FLOAT64 n/a DECIMAL[(P, S)] BYTES / INT8 / INT16 / INT32 / INT64 org.apache.kafka.connect.data.Decimal if using BYTES Handled equivalently to NUMBER (note that S defaults to 0 for DECIMAL ). When the decimal.handling.mode property is set to double , the connector represents DECIMAL values as Java double values with schema type FLOAT64 . When the decimal.handling.mode property is set to string , the connector represents DECIMAL values as their formatted string representation with schema type STRING . DOUBLE PRECISION STRUCT io.debezium.data.VariableScaleDecimal Contains a structure with two fields: scale of type INT32 that contains the scale of the transferred value and value of type BYTES containing the original value in an unscaled form. FLOAT[(P)] STRUCT io.debezium.data.VariableScaleDecimal Contains a structure with two fields: scale of type INT32 that contains the scale of the transferred value and value of type BYTES containing the original value in an unscaled form. INTEGER , INT BYTES org.apache.kafka.connect.data.Decimal INTEGER is mapped in Oracle to NUMBER(38,0) and hence can hold values larger than any of the INT types could store NUMBER[(P[, *])] STRUCT io.debezium.data.VariableScaleDecimal Contains a structure with two fields: scale of type INT32 that contains the scale of the transferred value and value of type BYTES containing the original value in an unscaled form. When the decimal.handling.mode property is set to double , the connector represents NUMBER values as Java double values with schema type FLOAT64 . When the decimal.handling.mode property is set to string , the connector represents NUMBER values as their formatted string representation with schema type STRING . NUMBER(P, S <= 0) INT8 / INT16 / INT32 / INT64 NUMBER columns with a scale of 0 represent integer numbers. A negative scale indicates rounding in Oracle, for example, a scale of -2 causes rounding to hundreds. 
Depending on the precision and scale, one of the following matching Kafka Connect integer types is chosen: P - S < 3, INT8 P - S < 5, INT16 P - S < 10, INT32 P - S < 19, INT64 P - S >= 19, BYTES ( org.apache.kafka.connect.data.Decimal ) When the decimal.handling.mode property is set to double , the connector represents NUMBER values as Java double values with schema type FLOAT64 . When the decimal.handling.mode property is set to string , the connector represents NUMBER values as their formatted string representation with schema type STRING . NUMBER(P, S > 0) BYTES org.apache.kafka.connect.data.Decimal NUMERIC[(P, S)] BYTES / INT8 / INT16 / INT32 / INT64 org.apache.kafka.connect.data.Decimal if using BYTES Handled equivalently to NUMBER (note that S defaults to 0 for NUMERIC ). When the decimal.handling.mode property is set to double , the connector represents NUMERIC values as Java double values with schema type FLOAT64 . When the decimal.handling.mode property is set to string , the connector represents NUMERIC values as their formatted string representation with schema type STRING . SMALLINT BYTES org.apache.kafka.connect.data.Decimal SMALLINT is mapped in Oracle to NUMBER(38,0) and hence can hold values larger than any of the INT types could store REAL STRUCT io.debezium.data.VariableScaleDecimal Contains a structure with two fields: scale of type INT32 that contains the scale of the transferred value and value of type BYTES containing the original value in an unscaled form. When the decimal.handling.mode property is set to double , the connector represents REAL values as Java double values with schema type FLOAT64 . When the decimal.handling.mode property is set to string , the connector represents REAL values as their formatted string representation with schema type STRING . As mentioned above, Oracle allows negative scales in the NUMBER type. This can cause an issue during conversion to the Avro format when the number is represented as the Decimal . The Decimal type includes scale information, but the Avro specification allows only positive values for the scale. Depending on the schema registry used, this may result in an Avro serialization failure. To avoid this issue, you can use NumberToZeroScaleConverter , which converts sufficiently high numbers (P - S >= 19) with negative scale into the Decimal type with zero scale. You can enable it by registering the converter under a prefix such as zero_scale in the connector's converters configuration property. By default, the number is converted to the Decimal type ( zero_scale.decimal.mode=precise ), but for completeness, the remaining two supported types ( double and string ) are available as well. Boolean types Oracle does not provide native support for a BOOLEAN data type. However, it is common practice to use other data types with certain semantics to simulate the concept of a logical BOOLEAN data type. To enable you to convert source columns to Boolean data types, Debezium provides a NumberOneToBooleanConverter custom converter that you can use in one of the following ways: Map all NUMBER(1) columns to a BOOLEAN type. Enumerate a subset of columns by using a comma-separated list of regular expressions. To use this type of conversion, you must set the converters configuration property to register the converter, and use its selector parameter to list the columns to convert. Temporal types Other than the Oracle INTERVAL , TIMESTAMP WITH TIME ZONE , and TIMESTAMP WITH LOCAL TIME ZONE data types, the way that the connector converts temporal types depends on the value of the time.precision.mode configuration property.
When the time.precision.mode configuration property is set to adaptive (the default), then the connector determines the literal and semantic type for the temporal types based on the column's data type definition so that events exactly represent the values in the database: Oracle data type Literal type (schema type) Semantic type (schema name) and Notes DATE INT64 io.debezium.time.Timestamp Represents the number of milliseconds since the UNIX epoch, and does not include timezone information. INTERVAL DAY[(M)] TO SECOND FLOAT64 io.debezium.time.MicroDuration The number of micro seconds for a time interval using the 365.25 / 12.0 formula for days per month average. io.debezium.time.Interval (when interval.handling.mode is set to string ) The string representation of the interval value that follows the pattern P<years>Y<months>M<days>DT<hours>H<minutes>M<seconds>S , for example, P1Y2M3DT4H5M6.78S . INTERVAL YEAR[(M)] TO MONTH FLOAT64 io.debezium.time.MicroDuration The number of micro seconds for a time interval using the 365.25 / 12.0 formula for days per month average. io.debezium.time.Interval (when interval.handling.mode is set to string ) The string representation of the interval value that follows the pattern P<years>Y<months>M<days>DT<hours>H<minutes>M<seconds>S , for example, P1Y2M3DT4H5M6.78S . TIMESTAMP(0 - 3) INT64 io.debezium.time.Timestamp Represents the number of milliseconds since the UNIX epoch, and does not include timezone information. TIMESTAMP, TIMESTAMP(4 - 6) INT64 io.debezium.time.MicroTimestamp Represents the number of microseconds since the UNIX epoch, and does not include timezone information. TIMESTAMP(7 - 9) INT64 io.debezium.time.NanoTimestamp Represents the number of nanoseconds since the UNIX epoch, and does not include timezone information. TIMESTAMP WITH TIME ZONE STRING io.debezium.time.ZonedTimestamp A string representation of a timestamp with timezone information. TIMESTAMP WITH LOCAL TIME ZONE STRING io.debezium.time.ZonedTimestamp A string representation of a timestamp in UTC. When the time.precision.mode configuration property is set to connect , then the connector uses the predefined Kafka Connect logical types. This can be useful when consumers only know about the built-in Kafka Connect logical types and are unable to handle variable-precision time values. Because the level of precision that Oracle supports exceeds the level that the logical types in Kafka Connect support, if you set time.precision.mode to connect , a loss of precision results when the fractional second precision value of a database column is greater than 3: Oracle data type Literal type (schema type) Semantic type (schema name) and Notes DATE INT32 org.apache.kafka.connect.data.Date Represents the number of days since the UNIX epoch. INTERVAL DAY[(M)] TO SECOND FLOAT64 io.debezium.time.MicroDuration The number of micro seconds for a time interval using the 365.25 / 12.0 formula for days per month average. io.debezium.time.Interval (when interval.handling.mode is set to string ) The string representation of the interval value that follows the pattern P<years>Y<months>M<days>DT<hours>H<minutes>M<seconds>S , for example, P1Y2M3DT4H5M6.78S . INTERVAL YEAR[(M)] TO MONTH FLOAT64 io.debezium.time.MicroDuration The number of micro seconds for a time interval using the 365.25 / 12.0 formula for days per month average. 
io.debezium.time.Interval (when interval.handling.mode is set to string ) The string representation of the interval value that follows the pattern P<years>Y<months>M<days>DT<hours>H<minutes>M<seconds>S , for example, P1Y2M3DT4H5M6.78S . TIMESTAMP(0 - 3) INT64 org.apache.kafka.connect.data.Timestamp Represents the number of milliseconds since the UNIX epoch, and does not include timezone information. TIMESTAMP(4 - 6) INT64 org.apache.kafka.connect.data.Timestamp Represents the number of milliseconds since the UNIX epoch, and does not include timezone information. TIMESTAMP(7 - 9) INT64 org.apache.kafka.connect.data.Timestamp Represents the number of milliseconds since the UNIX epoch, and does not include timezone information. TIMESTAMP WITH TIME ZONE STRING io.debezium.time.ZonedTimestamp A string representation of a timestamp with timezone information. TIMESTAMP WITH LOCAL TIME ZONE STRING io.debezium.time.ZonedTimestamp A string representation of a timestamp in UTC. ROWID types The following table describes how the connector maps ROWID (row address) data types. Table 7.12. Mappings for Oracle ROWID data types Oracle Data Type Literal type (schema type) Semantic type (schema name) and Notes ROWID STRING n/a UROWID n/a This data type is not supported. User-defined types Oracle enables you to define custom data types to provide flexibility when the built-in data types do not satisfy your requirements. There are several user-defined types, such as Object types, REF data types, Varrays, and Nested Tables. At this time, you cannot use the Debezium Oracle connector with any of these user-defined types. Oracle-supplied types Oracle provides SQL-based interfaces that you can use to define new types when the built-in or ANSI-supported types are insufficient. Oracle offers several commonly used data types to serve a broad array of purposes such as Any , XML , or Spatial types. At this time, you cannot use the Debezium Oracle connector with any of these data types. Default Values If a default value is specified for a column in the database schema, the Oracle connector will attempt to propagate this value to the schema of the corresponding Kafka record field. Most common data types are supported, including: Character types ( CHAR , NCHAR , VARCHAR , VARCHAR2 , NVARCHAR , NVARCHAR2 ) Numeric types ( INTEGER , NUMERIC , etc.) Temporal types ( DATE , TIMESTAMP , INTERVAL , etc.) If a temporal type uses a function call such as TO_TIMESTAMP or TO_DATE to represent the default value, the connector will resolve the default value by making an additional database call to evaluate the function. For example, if a DATE column is defined with the default value of TO_DATE('2021-01-02', 'YYYY-MM-DD') , the column's default value will be the number of days since the UNIX epoch for that date, or 18629 in this case. If a temporal type uses the SYSDATE constant to represent the default value, the connector will resolve this based on whether the column is defined as NOT NULL or NULL . If the column is nullable, no default value will be set; however, if the column isn't nullable then the default value will be resolved as either 0 (for DATE or TIMESTAMP(n) data types) or 1970-01-01T00:00:00Z (for TIMESTAMP WITH TIME ZONE or TIMESTAMP WITH LOCAL TIME ZONE data types). The default value type will be numeric, except if the column is a TIMESTAMP WITH TIME ZONE or TIMESTAMP WITH LOCAL TIME ZONE , in which case it is emitted as a string.
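To make the default-value resolution concrete, the following is a minimal, hypothetical sketch of a table definition (the table name and columns are illustrative, not taken from this document) whose DATE column uses the TO_DATE function call described above; under the behavior described in this section, the connector would evaluate the function and propagate 18629, the number of days since the UNIX epoch, as the field's default in the Kafka record schema.

-- Hypothetical table used only to illustrate default-value propagation.
-- The ORDER_DATE default is a function call, so the connector issues an
-- additional database call to evaluate TO_DATE('2021-01-02', 'YYYY-MM-DD')
-- and emits 18629 as the schema default for that field.
CREATE TABLE inventory.orders_example (
    id         NUMBER(9) NOT NULL PRIMARY KEY,
    order_date DATE DEFAULT TO_DATE('2021-01-02', 'YYYY-MM-DD') NOT NULL
);

7.4.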
Setting up Oracle to work with Debezium The following steps are necessary to set up Oracle for use with the Debezium Oracle connector. These steps assume the use of the multi-tenancy configuration with a container database and at least one pluggable database. If you do not intend to use a multi-tenant configuration, it might be necessary to adjust the following steps. For details about setting up Oracle for use with the Debezium connector, see the following sections: Section 7.4.1, "Compatibility of the Debezium Oracle connector with Oracle installation types" Section 7.4.2, "Schemas that the Debezium Oracle connector excludes when capturing change events" Section 7.4.4, "Preparing Oracle databases for use with Debezium" Section 7.4.5, "Resizing Oracle redo logs to accommodate the data dictionary" Section 7.4.6, "Creating an Oracle user for the Debezium Oracle connector" Section 7.4.7, "Support for Oracle standby databases" 7.4.1. Compatibility of the Debezium Oracle connector with Oracle installation types An Oracle database can be installed either as a standalone instance or using Oracle Real Application Cluster (RAC). The Debezium Oracle connector is compatible with both types of installation. 7.4.2. Schemas that the Debezium Oracle connector excludes when capturing change events When the Debezium Oracle connector captures tables, it automatically excludes tables from the following schemas: appqossys audsys ctxsys dvsys dbsfwuser dbsnmp qsmadmin_internal lbacsys mdsys ojvmsys olapsys orddata ordsys outln sys system wmsys xdb To enable the connector to capture changes from a table, the table must use a schema that is not named in the preceding list. 7.4.3. Tables that the Debezium Oracle connector excludes when capturing change events When the Debezium Oracle connector captures tables, it automatically excludes tables that match the following rules: Compression Advisor tables matching the pattern CMP[3|4]$[0-9]+ . Index-organized tables matching the pattern SYS_IOT_OVER_% . Spatial tables matching the patterns MDRT_% , MDRS_% , or MDXT_% . Nested tables To enable the connector to capture a table with a name that matches any of the preceding rules, you must rename the table. 7.4.4. Preparing Oracle databases for use with Debezium Configuration needed for Oracle LogMiner Oracle AWS RDS does not allow you to execute the commands above, nor does it allow you to log in as sysdba. AWS provides these alternative commands to configure LogMiner. Before executing these commands, ensure that your Oracle AWS RDS instance is enabled for backups. To confirm that Oracle has backups enabled, first check the LOG_MODE value reported by the database. The LOG_MODE should say ARCHIVELOG. If it does not, you may need to reboot your Oracle AWS RDS instance. Configuration needed for Oracle AWS RDS LogMiner Once LOG_MODE is set to ARCHIVELOG, execute the commands to complete LogMiner configuration. The first command sets the database to use archive logs and the second adds supplemental logging. Configuration needed for Oracle AWS RDS LogMiner To enable Debezium to capture the before state of changed database rows, you must also enable supplemental logging for captured tables or for the entire database. The example sketched after the following note illustrates how to configure supplemental logging for all columns in a single inventory.customers table. Enabling supplemental logging for all table columns increases the volume of the Oracle redo logs. To prevent excessive growth in the size of the logs, apply the preceding configuration selectively.
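A minimal sketch of the two supplemental logging statements discussed here, assuming the inventory.customers table from the example above; the first statement is the table-level configuration referenced in the preceding paragraph, and the second is the database-level minimal supplemental logging that the next paragraph requires:

-- Table-level: log all columns so that change events can include the full before state
ALTER TABLE inventory.customers ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
-- Database-level: minimal supplemental logging, required for Oracle LogMiner
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;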
Minimal supplemental logging must be enabled at the database level, as shown by the ALTER DATABASE statement in the preceding sketch. 7.4.5. Resizing Oracle redo logs to accommodate the data dictionary Depending on the database configuration, the size and number of redo logs might not be sufficient to achieve acceptable performance. Before you set up the Debezium Oracle connector, ensure that the capacity of the redo logs is sufficient to support the database. The capacity of the redo logs for a database must be sufficient to store its data dictionary. In general, the size of the data dictionary increases with the number of tables and columns in the database. If the redo log lacks sufficient capacity, both the database and the Debezium connector might experience performance problems. Consult with your database administrator to evaluate whether the database might require increased log capacity. 7.4.6. Creating an Oracle user for the Debezium Oracle connector For the Debezium Oracle connector to capture change events, it must run as an Oracle LogMiner user that has specific permissions. The following example shows the SQL for creating an Oracle user account for the connector in a multi-tenant database model. Warning The connector captures database changes that are made by its own Oracle user account. However, it does not capture changes that are made by the SYS or SYSTEM user accounts. Creating the connector's LogMiner user sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/logminer_tbs.dbf' SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED; exit; sqlplus sys/top_secret@//localhost:1521/ORCLPDB1 as sysdba CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/logminer_tbs.dbf' SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED; exit; sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba CREATE USER c##dbzuser IDENTIFIED BY dbz DEFAULT TABLESPACE logminer_tbs QUOTA UNLIMITED ON logminer_tbs CONTAINER=ALL; GRANT CREATE SESSION TO c##dbzuser CONTAINER=ALL; 1 GRANT SET CONTAINER TO c##dbzuser CONTAINER=ALL; 2 GRANT SELECT ON V_$DATABASE to c##dbzuser CONTAINER=ALL; 3 GRANT FLASHBACK ANY TABLE TO c##dbzuser CONTAINER=ALL; 4 GRANT SELECT ANY TABLE TO c##dbzuser CONTAINER=ALL; 5 GRANT SELECT_CATALOG_ROLE TO c##dbzuser CONTAINER=ALL; 6 GRANT EXECUTE_CATALOG_ROLE TO c##dbzuser CONTAINER=ALL; 7 GRANT SELECT ANY TRANSACTION TO c##dbzuser CONTAINER=ALL; 8 GRANT LOGMINING TO c##dbzuser CONTAINER=ALL; 9 GRANT CREATE TABLE TO c##dbzuser CONTAINER=ALL; 10 GRANT LOCK ANY TABLE TO c##dbzuser CONTAINER=ALL; 11 GRANT CREATE SEQUENCE TO c##dbzuser CONTAINER=ALL; 12 GRANT EXECUTE ON DBMS_LOGMNR TO c##dbzuser CONTAINER=ALL; 13 GRANT EXECUTE ON DBMS_LOGMNR_D TO c##dbzuser CONTAINER=ALL; 14 GRANT SELECT ON V_$LOG TO c##dbzuser CONTAINER=ALL; 15 GRANT SELECT ON V_$LOG_HISTORY TO c##dbzuser CONTAINER=ALL; 16 GRANT SELECT ON V_$LOGMNR_LOGS TO c##dbzuser CONTAINER=ALL; 17 GRANT SELECT ON V_$LOGMNR_CONTENTS TO c##dbzuser CONTAINER=ALL; 18 GRANT SELECT ON V_$LOGMNR_PARAMETERS TO c##dbzuser CONTAINER=ALL; 19 GRANT SELECT ON V_$LOGFILE TO c##dbzuser CONTAINER=ALL; 20 GRANT SELECT ON V_$ARCHIVED_LOG TO c##dbzuser CONTAINER=ALL; 21 GRANT SELECT ON V_$ARCHIVE_DEST_STATUS TO c##dbzuser CONTAINER=ALL; 22 GRANT SELECT ON V_$TRANSACTION TO c##dbzuser CONTAINER=ALL; 23 GRANT SELECT ON V_$MYSTAT TO c##dbzuser CONTAINER=ALL; 24 GRANT SELECT ON V_$STATNAME TO c##dbzuser CONTAINER=ALL; 25 exit; Table 7.13.
Descriptions of permissions / grants Item Role name Description 1 CREATE SESSION Enables the connector to connect to Oracle. 2 SET CONTAINER Enables the connector to switch between pluggable databases. This is only required when the Oracle installation has container database support (CDB) enabled. 3 SELECT ON V_$DATABASE Enables the connector to read the V$DATABASE table. 4 FLASHBACK ANY TABLE Enables the connector to perform Flashback queries, which is how the connector performs the initial snapshot of data. 5 SELECT ANY TABLE Enables the connector to read any table. 6 SELECT_CATALOG_ROLE Enables the connector to read the data dictionary, which is needed by Oracle LogMiner sessions. 7 EXECUTE_CATALOG_ROLE Enables the connector to write the data dictionary into the Oracle redo logs, which is needed to track schema changes. 8 SELECT ANY TRANSACTION Enables the snapshot process to perform a Flashback snapshot query against any transaction. When FLASHBACK ANY TABLE is granted, this should also be granted. 9 LOGMINING This role was added in newer versions of Oracle as a way to grant full access to Oracle LogMiner and its packages. On older versions of Oracle that don't have this role, you can ignore this grant. 10 CREATE TABLE Enables the connector to create its flush table in its default tablespace. The flush table allows the connector to explicitly control flushing of the LGWR internal buffers to disk. 11 LOCK ANY TABLE Enables the connector to lock tables during schema snapshot. If snapshot locks are explicitly disabled via configuration, this grant can be safely ignored. 12 CREATE SEQUENCE Enables the connector to create a sequence in its default tablespace. 13 EXECUTE ON DBMS_LOGMNR Enables the connector to run methods in the DBMS_LOGMNR package. This is required to interact with Oracle LogMiner. On newer versions of Oracle this is granted via the LOGMINING role but on older versions, this must be explicitly granted. 14 EXECUTE ON DBMS_LOGMNR_D Enables the connector to run methods in the DBMS_LOGMNR_D package. This is required to interact with Oracle LogMiner. On newer versions of Oracle this is granted via the LOGMINING role but on older versions, this must be explicitly granted. 15 to 25 SELECT ON V_$... Enables the connector to read these tables. The connector must be able to read information about the Oracle redo and archive logs, and the current transaction state, to prepare the Oracle LogMiner session. Without these grants, the connector cannot operate. 7.4.7. Support for Oracle standby databases Important The ability for the Debezium Oracle connector to ingest changes from a read-only logical standby database is a Developer Preview feature. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview software for production or business-critical workloads. Developer Preview software provides early access to upcoming product software in advance of its possible inclusion in a Red Hat product offering. Customers can use this software to test functionality and provide feedback during the development process. This software might not have any documentation, is subject to change or removal at any time, and has received limited testing. Red Hat might provide ways to submit feedback on Developer Preview software without an associated SLA. For more information about the support scope of Red Hat Developer Preview software, see Developer Preview Support Scope . 7.5. 
Deployment of Debezium Oracle connectors You can use either of the following methods to deploy a Debezium Oracle connector: Use AMQ Streams to automatically create an image that includes the connector plug-in . This is the preferred method. Build a custom Kafka Connect container image from a Dockerfile . Important Due to licensing requirements, the Debezium Oracle connector archive does not include the Oracle JDBC driver that the connector requires to connect to an Oracle database. To enable the connector to access the database, you must add the driver to your connector environment. For more information, see Obtaining the Oracle JDBC driver . Additional resources Section 7.6, "Descriptions of Debezium Oracle connector configuration properties" 7.5.1. Obtaining the Oracle JDBC driver Due to licensing requirements, the Oracle JDBC driver file that Debezium requires to connect to an Oracle database is not included in the Debezium Oracle connector archive. The driver is available for download from Maven Central. Depending on the deployment method that you use, you retrieve the driver by adding a command to the Kafka Connect custom resource or to the Dockerfile that you use to build the connector image. If you use AMQ Streams to add the connector to your Kafka Connect image, add the Maven Central location for the driver to builds.plugins.artifact.url in the KafkaConnect custom resource as shown in Section 7.5.3, "Using AMQ Streams to deploy a Debezium Oracle connector" . If you use a Dockerfile to build a container image for the connector, insert a curl command in the Dockerfile to specify the URL for downloading the required driver file from Maven Central. For more information, see Deploying a Debezium Oracle connector by building a custom Kafka Connect container image from a Dockerfile . 7.5.2. Debezium Oracle connector deployment using AMQ Streams Beginning with Debezium 1.7, the preferred method for deploying a Debezium connector is to use AMQ Streams to build a Kafka Connect container image that includes the connector plug-in. During the deployment process, you create and use the following custom resources (CRs): A KafkaConnect CR that defines your Kafka Connect instance and includes information about the connector artifacts that need to be included in the image. A KafkaConnector CR that provides details, including the information that the connector uses to access the source database. After AMQ Streams starts the Kafka Connect pod, you start the connector by applying the KafkaConnector CR. In the build specification for the Kafka Connect image, you can specify the connectors that are available to deploy. For each connector plug-in, you can also specify other components that you want to make available for deployment. For example, you can add Service Registry artifacts, or the Debezium scripting component. When AMQ Streams builds the Kafka Connect image, it downloads the specified artifacts, and incorporates them into the image. The spec.build.output parameter in the KafkaConnect CR specifies where to store the resulting Kafka Connect container image. Container images can be stored in a Docker registry, or in an OpenShift ImageStream. To store images in an ImageStream, you must create the ImageStream before you deploy Kafka Connect. ImageStreams are not created automatically. Note If you use a KafkaConnect resource to create a cluster, afterwards you cannot use the Kafka Connect REST API to create or update connectors. You can still use the REST API to retrieve information. 
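If you plan to store the build output in an ImageStream, one way to create it ahead of time is with the oc client. The following is a minimal sketch that assumes the debezium project and the debezium-streams-connect ImageStream name used later in this chapter:

oc create imagestream debezium-streams-connect -n debezium

The ImageStream name must match the image that you reference in the spec.build.output section of the KafkaConnect custom resource.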
Additional resources Configuring Kafka Connect in Using AMQ Streams on OpenShift. Creating a new container image automatically using AMQ Streams in Deploying and Managing AMQ Streams on OpenShift. 7.5.3. Using AMQ Streams to deploy a Debezium Oracle connector With earlier versions of AMQ Streams, to deploy Debezium connectors on OpenShift, you were required to first build a Kafka Connect image for the connector. The current preferred method for deploying connectors on OpenShift is to use a build configuration in AMQ Streams to automatically build a Kafka Connect container image that includes the Debezium connector plug-ins that you want to use. During the build process, the AMQ Streams Operator transforms input parameters in a KafkaConnect custom resource, including Debezium connector definitions, into a Kafka Connect container image. The build downloads the necessary artifacts from the Red Hat Maven repository or another configured HTTP server. The newly created container is pushed to the container registry that is specified in .spec.build.output , and is used to deploy a Kafka Connect cluster. After AMQ Streams builds the Kafka Connect image, you create KafkaConnector custom resources to start the connectors that are included in the build. Prerequisites You have access to an OpenShift cluster on which the cluster Operator is installed. The AMQ Streams Operator is running. An Apache Kafka cluster is deployed as documented in Deploying and Upgrading AMQ Streams on OpenShift . Kafka Connect is deployed on AMQ Streams You have a Red Hat Integration license. The OpenShift oc CLI client is installed or you have access to the OpenShift Container Platform web console. Depending on how you intend to store the Kafka Connect build image, you need registry permissions or you must create an ImageStream resource: To store the build image in an image registry, such as Red Hat Quay.io or Docker Hub An account and permissions to create and manage images in the registry. To store the build image as a native OpenShift ImageStream An ImageStream resource is deployed to the cluster for storing new container images. You must explicitly create an ImageStream for the cluster. ImageStreams are not available by default. For more information about ImageStreams, see Managing image streams on OpenShift Container Platform . Procedure Log in to the OpenShift cluster. Create a Debezium KafkaConnect custom resource (CR) for the connector, or modify an existing one. For example, create a KafkaConnect CR with the name dbz-connect.yaml that specifies the metadata.annotations and spec.build properties. The following example shows an excerpt from a dbz-connect.yaml file that describes a KafkaConnect custom resource. Example 7.1. A dbz-connect.yaml file that defines a KafkaConnect custom resource that includes a Debezium connector In the example that follows, the custom resource is configured to download the following artifacts: The Debezium Oracle connector archive. The Service Registry archive. The Service Registry is an optional component. Add the Service Registry component only if you intend to use Avro serialization with the connector. The Debezium scripting SMT archive and the associated language dependencies that you want to use with the Debezium connector. The SMT archive and language dependencies are optional components. Add these components only if you intend to use the Debezium content-based routing SMT or filter SMT . 
The Oracle JDBC driver, which is required to connect to Oracle databases but is not included in the connector archive. apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: debezium-kafka-connect-cluster annotations: strimzi.io/use-connector-resources: "true" 1 spec: version: 3.5.0 build: 2 output: 3 type: imagestream 4 image: debezium-streams-connect:latest plugins: 5 - name: debezium-connector-oracle artifacts: - type: zip 6 url: https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-oracle/2.3.4.Final-redhat-00001/debezium-connector-oracle-2.3.4.Final-redhat-00001-plugin.zip 7 - type: zip url: https://maven.repository.redhat.com/ga/io/apicurio/apicurio-registry-distro-connect-converter/2.4.4.Final-redhat-<build-number>/apicurio-registry-distro-connect-converter-2.4.4.Final-redhat-<build-number>.zip 8 - type: zip url: https://maven.repository.redhat.com/ga/io/debezium/debezium-scripting/2.3.4.Final-redhat-00001/debezium-scripting-2.3.4.Final-redhat-00001.zip 9 - type: jar url: https://repo1.maven.org/maven2/org/codehaus/groovy/groovy/3.0.11/groovy-3.0.11.jar 10 - type: jar url: https://repo1.maven.org/maven2/org/codehaus/groovy/groovy-jsr223/3.0.11/groovy-jsr223-3.0.11.jar - type: jar url: https://repo1.maven.org/maven2/org/codehaus/groovy/groovy-json/3.0.11/groovy-json-3.0.11.jar - type: jar 11 url: https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc8/21.6.0.0/ojdbc8-21.6.0.0.jar bootstrapServers: debezium-kafka-cluster-kafka-bootstrap:9093 ... Table 7.14. Descriptions of Kafka Connect configuration settings Item Description 1 Sets the strimzi.io/use-connector-resources annotation to "true" to enable the Cluster Operator to use KafkaConnector resources to configure connectors in this Kafka Connect cluster. 2 The spec.build configuration specifies where to store the build image and lists the plug-ins to include in the image, along with the location of the plug-in artifacts. 3 The build.output specifies the registry in which the newly built image is stored. 4 Specifies the type and name of the image output. Valid values for output.type are docker to push into a container registry such as Docker Hub or Quay, or imagestream to push the image to an internal OpenShift ImageStream. To use an ImageStream, an ImageStream resource must be deployed to the cluster. For more information about specifying the build.output in the KafkaConnect configuration, see the AMQ Streams Build schema reference in Configuring AMQ Streams on OpenShift. 5 The plugins configuration lists all of the connectors that you want to include in the Kafka Connect image. For each entry in the list, specify a plug-in name and information about the artifacts that are required to build the connector. Optionally, for each connector plug-in, you can include other components that you want to be available for use with the connector. For example, you can add Service Registry artifacts, or the Debezium scripting component. 6 The value of artifacts.type specifies the file type of the artifact specified in the artifacts.url . Valid types are zip , tgz , or jar . Debezium connector archives are provided in .zip file format. JDBC driver files are in .jar format. The type value must match the type of the file that is referenced in the url field. 7 The value of artifacts.url specifies the address of an HTTP server, such as a Maven repository, that stores the file for the connector artifact. Debezium connector artifacts are available in the Red Hat Maven repository. 
The OpenShift cluster must have access to the specified server. 8 (Optional) Specifies the artifact type and url for downloading the Service Registry component. Include the Service Registry artifact, only if you want the connector to use Apache Avro to serialize event keys and values with the Service Registry, instead of using the default JSON converter. 9 (Optional) Specifies the artifact type and url for the Debezium scripting SMT archive to use with the Debezium connector. Include the scripting SMT only if you intend to use the Debezium content-based routing SMT or filter SMT To use the scripting SMT, you must also deploy a JSR 223-compliant scripting implementation, such as groovy. 10 (Optional) Specifies the artifact type and url for the JAR files of a JSR 223-compliant scripting implementation, which is required by the Debezium scripting SMT. Important If you use AMQ Streams to incorporate the connector plug-in into your Kafka Connect image, for each of the required scripting language components artifacts.url must specify the location of a JAR file, and the value of artifacts.type must also be set to jar . Invalid values cause the connector fails at runtime. To enable use of the Apache Groovy language with the scripting SMT, the custom resource in the example retrieves JAR files for the following libraries: groovy groovy-jsr223 (scripting agent) groovy-json (module for parsing JSON strings) The Debezium scripting SMT also supports the use of the JSR 223 implementation of GraalVM JavaScript. 11 Specifies the location of the Oracle JDBC driver in Maven Central. The required driver is not included in the Debezium Oracle connector archive. Apply the KafkaConnect build specification to the OpenShift cluster by entering the following command: oc create -f dbz-connect.yaml Based on the configuration specified in the custom resource, the Streams Operator prepares a Kafka Connect image to deploy. After the build completes, the Operator pushes the image to the specified registry or ImageStream, and starts the Kafka Connect cluster. The connector artifacts that you listed in the configuration are available in the cluster. Create a KafkaConnector resource to define an instance of each connector that you want to deploy. For example, create the following KafkaConnector CR, and save it as oracle-inventory-connector.yaml Example 7.2. oracle-inventory-connector.yaml file that defines the KafkaConnector custom resource for a Debezium connector apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: labels: strimzi.io/cluster: debezium-kafka-connect-cluster name: inventory-connector-oracle 1 spec: class: io.debezium.connector.oracle.OracleConnector 2 tasksMax: 1 3 config: 4 schema.history.internal.kafka.bootstrap.servers: debezium-kafka-cluster-kafka-bootstrap.debezium.svc.cluster.local:9092 schema.history.internal.kafka.topic: schema-changes.inventory database.hostname: oracle.debezium-oracle.svc.cluster.local 5 database.port: 1521 6 database.user: debezium 7 database.password: dbz 8 database.dbname: mydatabase 9 topic.prefix: inventory-connector-oracle 10 table.include.list: PUBLIC.INVENTORY 11 ... Table 7.15. Descriptions of connector configuration settings Item Description 1 The name of the connector to register with the Kafka Connect cluster. 2 The name of the connector class. 3 The number of tasks that can operate concurrently. 4 The connector's configuration. 5 The address of the host database instance. 6 The port number of the database instance. 
7 The name of the account that Debezium uses to connect to the database. 8 The password that Debezium uses to connect to the database user account. 9 The name of the database to capture changes from. 10 The topic prefix for the database instance or cluster. The specified name must be formed only from alphanumeric characters or underscores. Because the topic prefix is used as the prefix for any Kafka topics that receive change events from this connector, the name must be unique among the connectors in the cluster. This namespace is also used in the names of related Kafka Connect schemas, and the namespaces of a corresponding Avro schema if you integrate the connector with the Avro connector . 11 The list of tables from which the connector captures change events. Create the connector resource by running the following command: oc create -n <namespace> -f <kafkaConnector> .yaml For example, oc create -n debezium -f {context}-inventory-connector.yaml The connector is registered to the Kafka Connect cluster and starts to run against the database that is specified by spec.config.database.dbname in the KafkaConnector CR. After the connector pod is ready, Debezium is running. You are now ready to verify the Debezium Oracle deployment . 7.5.4. Deploying a Debezium Oracle connector by building a custom Kafka Connect container image from a Dockerfile To deploy a Debezium Oracle connector, you must build a custom Kafka Connect container image that contains the Debezium connector archive, and then push this container image to a container registry. You then need to create the following custom resources (CRs): A KafkaConnect CR that defines your Kafka Connect instance. The image property in the CR specifies the name of the container image that you create to run your Debezium connector. You apply this CR to the OpenShift instance where Red Hat AMQ Streams is deployed. AMQ Streams offers operators and images that bring Apache Kafka to OpenShift. A KafkaConnector CR that defines your Debezium Oracle connector. Apply this CR to the same OpenShift instance where you apply the KafkaConnect CR. Prerequisites Oracle Database is running and you completed the steps to set up Oracle to work with a Debezium connector . AMQ Streams is deployed on OpenShift and is running Apache Kafka and Kafka Connect. For more information, see Deploying and Upgrading AMQ Streams on OpenShift Podman or Docker is installed. You have an account and permissions to create and manage containers in the container registry (such as quay.io or docker.io ) to which you plan to add the container that will run your Debezium connector. The Kafka Connect server has access to Maven Central to download the required JDBC driver for Oracle. You can also use a local copy of the driver, or one that is available from a local Maven repository or other HTTP server. For more information, see Obtaining the Oracle JDBC driver . Procedure Create the Debezium Oracle container for Kafka Connect: Create a Dockerfile that uses registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0 as the base image. 
For example, from a terminal window, enter the following command: cat <<EOF >debezium-container-for-oracle.yaml 1 FROM registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0 USER root:root RUN mkdir -p /opt/kafka/plugins/debezium 2 RUN cd /opt/kafka/plugins/debezium/ \ && curl -O https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-oracle/2.3.4.Final-redhat-00001/debezium-connector-oracle-2.3.4.Final-redhat-00001-plugin.zip \ && unzip debezium-connector-oracle-2.3.4.Final-redhat-00001-plugin.zip \ && rm debezium-connector-oracle-2.3.4.Final-redhat-00001-plugin.zip RUN cd /opt/kafka/plugins/debezium/ \ && curl -O https://repo1.maven.org/maven2/com/oracle/ojdbc/ojdbc8/21.1.0.0/ojdbc8-21.1.0.0.jar USER 1001 EOF Item Description 1 You can specify any file name that you want. 2 Specifies the path to your Kafka Connect plug-ins directory. If your Kafka Connect plug-ins directory is in a different location, replace this path with the actual path of your directory. The command creates a Dockerfile with the name debezium-container-for-oracle.yaml in the current directory. Build the container image from the debezium-container-for-oracle.yaml Docker file that you created in the step. From the directory that contains the file, open a terminal window and enter one of the following commands: podman build -t debezium-container-for-oracle:latest . docker build -t debezium-container-for-oracle:latest . The preceding commands build a container image with the name debezium-container-for-oracle . Push your custom image to a container registry, such as quay.io or an internal container registry. The container registry must be available to the OpenShift instance where you want to deploy the image. Enter one of the following commands: podman push <myregistry.io> /debezium-container-for-oracle:latest docker push <myregistry.io> /debezium-container-for-oracle:latest Create a new Debezium Oracle KafkaConnect custom resource (CR). For example, create a KafkaConnect CR with the name dbz-connect.yaml that specifies annotations and image properties. The following example shows an excerpt from a dbz-connect.yaml file that describes a KafkaConnect custom resource. apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: "true" 1 spec: image: debezium-container-for-oracle 2 ... Item Description 1 metadata.annotations indicates to the Cluster Operator that KafkaConnector resources are used to configure connectors in this Kafka Connect cluster. 2 spec.image specifies the name of the image that you created to run your Debezium connector. This property overrides the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE variable in the Cluster Operator. Apply the KafkaConnect CR to the OpenShift Kafka Connect environment by entering the following command: oc create -f dbz-connect.yaml The command adds a Kafka Connect instance that specifies the name of the image that you created to run your Debezium connector. Create a KafkaConnector custom resource that configures your Debezium Oracle connector instance. You configure a Debezium Oracle connector in a .yaml file that specifies the configuration properties for the connector. The connector configuration might instruct Debezium to produce events for a subset of the schemas and tables, or it might set properties so that Debezium ignores, masks, or truncates values in specified columns that are sensitive, too large, or not needed. 
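For instance, a minimal sketch of such filtering and masking settings (the schema, table, and column names here are hypothetical) might add the following entries under the config section of the KafkaConnector CR:

table.include.list: INVENTORY.CUSTOMERS
column.mask.with.10.chars: INVENTORY.CUSTOMERS.CREDIT_CARD

The first property limits capture to a single table, and the second replaces the values of one sensitive column with ten asterisks; both properties are described in the connector configuration reference later in this chapter.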
The following example configures a Debezium connector that connects to an Oracle host IP address, on port 1521 . This host has a database named ORCLCDB , and server1 is the server's logical name. Oracle inventory-connector.yaml apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: inventory-connector-oracle 1 labels: strimzi.io/cluster: my-connect-cluster annotations: strimzi.io/use-connector-resources: 'true' spec: class: io.debezium.connector.oracle.OracleConnector 2 config: database.hostname: <oracle_ip_address> 3 database.port: 1521 4 database.user: c##dbzuser 5 database.password: dbz 6 database.dbname: ORCLCDB 7 database.pdb.name : ORCLPDB1, 8 topic.prefix: inventory-connector-oracle 9 schema.history.internal.kafka.bootstrap.servers: kafka:9092 10 schema.history.internal.kafka.topic: schema-changes.inventory 11 Table 7.16. Descriptions of connector configuration settings Item Description 1 The name of our connector when we register it with a Kafka Connect service. 2 The name of this Oracle connector class. 3 The address of the Oracle instance. 4 The port number of the Oracle instance. 5 The name of the Oracle user, as specified in Creating users for the connector . 6 The password for the Oracle user, as specified in Creating users for the connector . 7 The name of the database to capture changes from. 8 The name of the Oracle pluggable database that the connector captures changes from. Used in container database (CDB) installations only. 9 Topic prefix identifies and provides a namespace for the Oracle database server from which the connector captures changes. 10 The list of Kafka brokers that this connector uses to write and recover DDL statements to the database schema history topic. 11 The name of the database schema history topic where the connector writes and recovers DDL statements. This topic is for internal use only and should not be used by consumers. Create your connector instance with Kafka Connect. For example, if you saved your KafkaConnector resource in the inventory-connector.yaml file, you would run the following command: oc apply -f inventory-connector.yaml The preceding command registers inventory-connector and the connector starts to run against the server1 database as defined in the KafkaConnector CR. For the complete list of the configuration properties that you can set for the Debezium Oracle connector, see Oracle connector properties . Results After the connector starts, it performs a consistent snapshot of the Oracle databases that the connector is configured for. The connector then starts generating data change events for row-level operations and streaming the change event records to Kafka topics. 7.5.5. Configuration of container databases and non-container-databases Oracle Database supports the following deployment types: Container database (CDB) A database that can contain multiple pluggable databases (PDBs). Database clients connect to each PDB as if it were a standard, non-CDB database. Non-container database (non-CDB) A standard Oracle database, which does not support the creation of pluggable databases. 7.5.6. Verifying that the Debezium Oracle connector is running If the connector starts correctly without errors, it creates a topic for each table that the connector is configured to capture. Downstream applications can subscribe to these topics to retrieve information events that occur in the source database. 
To verify that the connector is running, you perform the following operations from the OpenShift Container Platform web console, or through the OpenShift CLI tool (oc): Verify the connector status. Verify that the connector generates topics. Verify that topics are populated with events for read operations ("op":"r") that the connector generates during the initial snapshot of each table. Prerequisites A Debezium connector is deployed to AMQ Streams on OpenShift. The OpenShift oc CLI client is installed. You have access to the OpenShift Container Platform web console. Procedure Check the status of the KafkaConnector resource by using one of the following methods: From the OpenShift Container Platform web console: Navigate to Home Search . On the Search page, click Resources to open the Select Resource box, and then type KafkaConnector . From the KafkaConnectors list, click the name of the connector that you want to check, for example inventory-connector-oracle . In the Conditions section, verify that the values in the Type and Status columns are set to Ready and True . From a terminal window: Enter the following command: oc describe KafkaConnector <connector-name> -n <project> For example, oc describe KafkaConnector inventory-connector-oracle -n debezium The command returns status information that is similar to the following output: Example 7.3. KafkaConnector resource status Name: inventory-connector-oracle Namespace: debezium Labels: strimzi.io/cluster=debezium-kafka-connect-cluster Annotations: <none> API Version: kafka.strimzi.io/v1beta2 Kind: KafkaConnector ... Status: Conditions: Last Transition Time: 2021-12-08T17:41:34.897153Z Status: True Type: Ready Connector Status: Connector: State: RUNNING worker_id: 10.131.1.124:8083 Name: inventory-connector-oracle Tasks: Id: 0 State: RUNNING worker_id: 10.131.1.124:8083 Type: source Observed Generation: 1 Tasks Max: 1 Topics: inventory-connector-oracle.inventory inventory-connector-oracle.inventory.addresses inventory-connector-oracle.inventory.customers inventory-connector-oracle.inventory.geom inventory-connector-oracle.inventory.orders inventory-connector-oracle.inventory.products inventory-connector-oracle.inventory.products_on_hand Events: <none> Verify that the connector created Kafka topics: From the OpenShift Container Platform web console. Navigate to Home Search . On the Search page, click Resources to open the Select Resource box, and then type KafkaTopic . From the KafkaTopics list, click the name of the topic that you want to check, for example, inventory-connector-oracle.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d . In the Conditions section, verify that the values in the Type and Status columns are set to Ready and True . From a terminal window: Enter the following command: oc get kafkatopics The command returns status information that is similar to the following output: Example 7.4. KafkaTopic resource status Check topic content. 
From a terminal window, enter the following command: oc exec -n <project> -it <kafka-cluster> -- /opt/kafka/bin/kafka-console-consumer.sh \ > --bootstrap-server localhost:9092 \ > --from-beginning \ > --property print.key=true \ > --topic= <topic-name > For example, oc exec -n debezium -it debezium-kafka-cluster-kafka-0 -- /opt/kafka/bin/kafka-console-consumer.sh \ > --bootstrap-server localhost:9092 \ > --from-beginning \ > --property print.key=true \ > --topic=inventory-connector-oracle.inventory.products_on_hand The format for specifying the topic name is the same as the oc describe command returns in Step 1, for example, inventory-connector-oracle.inventory.addresses . For each event in the topic, the command returns information that is similar to the following output: Example 7.5. Content of a Debezium change event In the preceding example, the payload value shows that the connector snapshot generated a read ( "op" ="r" ) event from the table inventory.products_on_hand . The "before" state of the product_id record is null , indicating that no value exists for the record. The "after" state shows a quantity of 3 for the item with product_id 101 . 7.6. Descriptions of Debezium Oracle connector configuration properties The Debezium Oracle connector has numerous configuration properties that you can use to achieve the right connector behavior for your application. Many properties have default values. Information about the properties is organized as follows: Required Debezium Oracle connector configuration properties Database schema history connector configuration properties that control how Debezium processes events that it reads from the database schema history topic. Pass-through database schema history properties Pass-through database driver properties that control the behavior of the database driver. Required Debezium Oracle connector configuration properties The following configuration properties are required unless a default value is available. Property Default Description name No default Unique name for the connector. Attempting to register again with the same name will fail. (This property is required by all Kafka Connect connectors.) connector.class No default The name of the Java class for the connector. Always use a value of io.debezium.connector.oracle.OracleConnector for the Oracle connector. converters No default Enumerates a comma-separated list of the symbolic names of the custom converter instances that the connector can use. For example, boolean . This property is required to enable the connector to use a custom converter. For each converter that you configure for a connector, you must also add a .type property, which specifies the fully-qualifed name of the class that implements the converter interface. The .type property uses the following format: <converterSymbolicName> .type For example, If you want to further control the behavior of a configured converter, you can add one or more configuration parameters to pass values to the converter. To associate any additional configuration parameters with a converter, prefix the parameter names with the symbolic name of the converter. For example, to define a selector parameter that specifies the subset of columns that the boolean converter processes, add the following property: tasks.max 1 The maximum number of tasks to create for this connector. The Oracle connector always uses a single task and therefore does not use this value, so the default is always acceptable. 
database.hostname No default IP address or hostname of the Oracle database server. database.port No default Integer port number of the Oracle database server. database.user No default Name of the Oracle user account that the connector uses to connect to the Oracle database server. database.password No default Password to use when connecting to the Oracle database server. database.dbname No default Name of the database to connect to. In a container database environment, specify the name of the root container database (CDB), not the name of an included pluggable database (PDB). database.url No default Specifies the raw database JDBC URL. Use this property to provide flexibility in defining that database connection. Valid values include raw TNS names and RAC connection strings. database.pdb.name No default Name of the Oracle pluggable database to connect to. Use this property with container database (CDB) installations only. topic.prefix No default Topic prefix that provides a namespace for the Oracle database server from which the connector captures changes. The value that you set is used as a prefix for all Kafka topic names that the connector emits. Specify a topic prefix that is unique among all connectors in your Debezium environment. The following characters are valid: alphanumeric characters, hyphens, dots, and underscores. Warning Do not change the value of this property. If you change the name value, after a restart, instead of continuing to emit events to the original topics, the connector emits subsequent events to topics whose names are based on the new value. The connector is also unable to recover its database schema history topic. database.connection.adapter logminer The adapter implementation that the connector uses when it streams database changes. You can set the following values: logminer (default):: The connector uses the native Oracle LogMiner API. snapshot.mode initial Specifies the mode that the connector uses to take snapshots of a captured table. You can set the following values: always The snapshot includes the structure and data of the captured tables. Specify this value to populate topics with a complete representation of the data from the captured tables on each connector start. initial The snapshot includes the structure and data of the captured tables. Specify this value to populate topics with a complete representation of the data from the captured tables. If the snapshot completes successfully, upon connector start snapshot is not executed again. initial_only The snapshot includes the structure and data of the captured tables. The connector performs an initial snapshot and then stops, without processing any subsequent changes. schema_only The snapshot includes only the structure of captured tables. Specify this value if you want the connector to capture data only for changes that occur after the snapshot. schema_only_recovery This is a recovery setting for a connector that has already been capturing changes. When you restart the connector, this setting enables recovery of a corrupted or lost database schema history topic. You might set it periodically to "clean up" a database schema history topic that has been growing unexpectedly. Database schema history topics require infinite retention. Note this mode is only safe to be used when it is guaranteed that no schema changes happened since the point in time the connector was shut down before and the point in time the snapshot is taken. 
After the snapshot is complete, the connector continues to read change events from the database's redo logs except when snapshot.mode is configured as initial_only . For more information, see the table of snapshot.mode options . snapshot.locking.mode shared Controls whether and for how long the connector holds a table lock. Table locks prevent certain types of changes table operations from occurring while the connector performs a snapshot. You can set the following values: shared Enables concurrent access to the table, but prevents any session from acquiring an exclusive table lock. The connector acquires a ROW SHARE level lock while it captures table schema. none Prevents the connector from acquiring any table locks during the snapshot. Use this setting only if no schema changes might occur during the creation of the snapshot. snapshot.include.collection.list All tables specified in the connector's table.include.list property. An optional, comma-separated list of regular expressions that match the fully-qualified names ( <databaseName>. <schemaName> . <tableName> ) of the tables to include in a snapshot. In a multitenant container database (CDB) environment, the regular expression must include the pluggable database (PDB) name , using the format <pdbName> . <schemaName> . <tableName> . To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name. Only POSIX regular expressions are valid. A snapshot can only include tables that are named in the connector's table.include.list property. This property takes effect only if the connector's snapshot.mode property is set to a value other than never . This property does not affect the behavior of incremental snapshots. snapshot.select.statement.overrides No default Specifies the table rows to include in a snapshot. Use the property if you want a snapshot to include only a subset of the rows in a table. This property affects snapshots only. It does not apply to events that the connector reads from the log. The property contains a comma-separated list of fully-qualified table names in the form <schemaName>.<tableName> . For example, "snapshot.select.statement.overrides": "inventory.products,customers.orders" For each table in the list, add a further configuration property that specifies the SELECT statement for the connector to run on the table when it takes a snapshot. The specified SELECT statement determines the subset of table rows to include in the snapshot. Use the following format to specify the name of this SELECT statement property: snapshot.select.statement.overrides. <schemaName> . <tableName> For example, snapshot.select.statement.overrides.customers.orders Example: From a customers.orders table that includes the soft-delete column, delete_flag , add the following properties if you want a snapshot to include only those records that are not soft-deleted: In the resulting snapshot, the connector includes only the records for which delete_flag = 0 . schema.include.list No default An optional, comma-separated list of regular expressions that match names of schemas for which you want to capture changes. Only POSIX regular expressions are valid. Any schema name not included in schema.include.list is excluded from having its changes captured. By default, all non-system schemas have their changes captured. 
To match the name of a schema, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the schema; it does not match substrings that might be present in a schema name. If you include this property in the configuration, do not also set the schema.exclude.list property. include.schema.comments false Boolean value that specifies whether the connector should parse and publish table and column comments on metadata objects. Enabling this option will bring the implications on memory usage. The number and size of logical schema objects is what largely impacts how much memory is consumed by the Debezium connectors, and adding potentially large string data to each of them can potentially be quite expensive. schema.exclude.list No default An optional, comma-separated list of regular expressions that match names of schemas for which you do not want to capture changes. Only POSIX regular expressions are valid. Any schema whose name is not included in schema.exclude.list has its changes captured, with the exception of system schemas. To match the name of a schema, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the schema; it does not match substrings that might be present in a schema name. If you include this property in the configuration, do not set the`schema.include.list` property. table.include.list No default An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be captured. Only POSIX regular expressions are valid. When this property is set, the connector captures changes only from the specified tables. Each table identifier uses the following format: <schema_name>.<table_name> By default, the connector monitors every non-system table in each captured database. To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name. If you include this property in the configuration, do not also set the table.exclude.list property. table.exclude.list No default An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be excluded from monitoring. Only POSIX regular expressions are valid. The connector captures change events from any table that is not specified in the exclude list. Specify the identifier for each table using the following format: <schemaName>.<tableName> . To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name. If you include this property in the configuration, do not also set the table.include.list property. column.include.list No default An optional, comma-separated list of regular expressions that match the fully-qualified names of columns that want to include in the change event message values. Only POSIX regular expressions are valid. 
Fully-qualified names for columns use the following format: <Schema_name>.<table_name>.<column_name> The primary key column is always included in an event's key, even if you do not use this property to explicitly include its value. To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column it does not match substrings that might be present in a column name. If you include this property in the configuration, do not also set the column.exclude.list property. column.exclude.list No default An optional, comma-separated list of regular expressions that match the fully-qualified names of columns that you want to exclude from change event message values. Only POSIX regular expressions are valid. Fully-qualified column names use the following format: <schema_name>.<table_name>.<column_name> The primary key column is always included in an event's key, even if you use this property to explicitly exclude its value. To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column it does not match substrings that might be present in a column name. If you include this property in the configuration, do not set the column.include.list property. skip.messages.without.change false Specifies whether to skip publishing messages when there is no change in included columns. This would essentially filter messages if there is no change in columns included as per column.include.list or column.exclude.list properties. column.mask.hash. hashAlgorithm .with.salt. salt ; column.mask.hash.v2. hashAlgorithm .with.salt. salt n/a An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Fully-qualified names for columns are of the form <schemaName> . <tableName> . <columnName> . To match the name of a column Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name. In the resulting change event record, the values for the specified columns are replaced with pseudonyms. A pseudonym consists of the hashed value that results from applying the specified hashAlgorithm and salt . Based on the hash function that is used, referential integrity is maintained, while column values are replaced with pseudonyms. Supported hash functions are described in the MessageDigest section of the Java Cryptography Architecture Standard Algorithm Name Documentation. In the following example, CzQMA0cB5K is a randomly selected salt. If necessary, the pseudonym is automatically shortened to the length of the column. The connector configuration can include multiple properties that specify different hash algorithms and salts. Depending on the hashAlgorithm used, the salt selected, and the actual data set, the resulting data set might not be completely masked. Hashing strategy version 2 should be used to ensure fidelity if the value is being hashed in different places or systems. 
binary.handling.mode bytes Specifies how binary ( blob ) columns should be represented in change events, including: bytes represents binary data as byte array (default), base64 represents binary data as base64-encoded String, base64-url-safe represents binary data as base64-url-safe-encoded String, hex represents binary data as hex-encoded (base16) String schema.name.adjustment.mode none Specifies how schema names should be adjusted for compatibility with the message converter used by the connector. Possible settings: none does not apply any adjustment. avro replaces the characters that cannot be used in the Avro type name with underscore. avro_unicode replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like _uxxxx. Note: _ is an escape sequence like backslash in Java field.name.adjustment.mode none Specifies how field names should be adjusted for compatibility with the message converter used by the connector. Possible settings: none does not apply any adjustment. avro replaces the characters that cannot be used in the Avro type name with underscore. avro_unicode replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like _uxxxx. Note: _ is an escape sequence like backslash in Java See Avro naming for more details. decimal.handling.mode precise Specifies how the connector should handle floating point values for NUMBER , DECIMAL and NUMERIC columns. You can set one of the following options: precise (default) Represents values precisely by using java.math.BigDecimal values represented in change events in a binary form. double Represents values by using double values. Using double values is easier, but can result in a loss of precision. string Encodes values as formatted strings. Using the string option is easier to consume, but results in a loss of semantic information about the real type. For more information, see Numeric types . interval.handling.mode numeric Specifies how the connector should handle values for interval columns: numeric represents intervals using approximate number of microseconds. string represents intervals exactly by using the string pattern representation P<years>Y<months>M<days>DT<hours>H<minutes>M<seconds>S . For example: P1Y2M3DT4H5M6.78S . event.processing.failure.handling.mode fail Specifies how the connector should react to exceptions during processing of events. You can set one of the following options: fail Propagates the exception (indicating the offset of the problematic event), causing the connector to stop. warn Causes the problematic event to be skipped. The offset of the problematic event is then logged. skip Causes the problematic event to be skipped. max.batch.size 2048 A positive integer value that specifies the maximum size of each batch of events to process during each iteration of this connector. max.queue.size 8192 Positive integer value that specifies the maximum number of records that the blocking queue can hold. When Debezium reads events streamed from the database, it places the events in the blocking queue before it writes them to Kafka. The blocking queue can provide backpressure for reading change events from the database in cases where the connector ingests messages faster than it can write them to Kafka, or when Kafka becomes unavailable. Events that are held in the queue are disregarded when the connector periodically records offsets. Always set the value of max.queue.size to be larger than the value of max.batch.size . 
max.queue.size.in.bytes 0 (disabled) A long integer value that specifies the maximum volume of the blocking queue in bytes. By default, volume limits are not specified for the blocking queue. To specify the number of bytes that the queue can consume, set this property to a positive long value. If max.queue.size is also set, writing to the queue is blocked when the size of the queue reaches the limit specified by either property. For example, if you set max.queue.size=1000 , and max.queue.size.in.bytes=5000 , writing to the queue is blocked after the queue contains 1000 records, or after the volume of the records in the queue reaches 5000 bytes. poll.interval.ms 500 (0.5 second) Positive integer value that specifies the number of milliseconds the connector should wait during each iteration for new change events to appear. tombstones.on.delete true Controls whether a delete event is followed by a tombstone event. The following values are possible: true For each delete operation, the connector emits a delete event and a subsequent tombstone event. false For each delete operation, the connector emits only a delete event. After a source record is deleted, a tombstone event (the default behavior) enables Kafka to completely delete all events that share the key of the deleted row in topics that have log compaction enabled. message.key.columns No default A list of expressions that specify the columns that the connector uses to form custom message keys for change event records that it publishes to the Kafka topics for specified tables. By default, Debezium uses the primary key column of a table as the message key for records that it emits. In place of the default, or to specify a key for tables that lack a primary key, you can configure custom message keys based on one or more columns. To establish a custom message key for a table, list the table, followed by the columns to use as the message key. Each list entry takes the following format: <fullyQualifiedTableName> : <keyColumn> , <keyColumn> To base a table key on multiple column names, insert commas between the column names. Each fully-qualified table name is a regular expression in the following format: <schemaName> . <tableName> The property can include entries for multiple tables. Use a semicolon to separate table entries in the list. The following example sets the message key for the tables inventory.customers and purchase.orders : inventory.customers:pk1,pk2;(.*).purchaseorders:pk3,pk4 For the table inventory.customer , the columns pk1 and pk2 are specified as the message key. For the purchaseorders tables in any schema, the columns pk3 and pk4 server as the message key. There is no limit to the number of columns that you use to create custom message keys. However, it's best to use the minimum number that are required to specify a unique key. column.truncate.to. length .chars No default An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want the connector to mask the values for a set of columns, for example, if they contain sensitive data. Set length to a positive integer to replace data in the specified columns with the number of asterisk ( * ) characters specified by the length in the property name. Set length to 0 (zero) to replace data in the specified columns with an empty string. The fully-qualified name of a column observes the following format: <schemaName> . <tableName> . <columnName> . 
To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name. You can specify multiple properties with different lengths in a single configuration. column.mask.with. length .chars No default An optional comma-separated list of regular expressions for masking column names in change event messages by replacing characters with asterisks ( * ). Specify the number of characters to replace in the name of the property, for example, column.mask.with.8.chars . Specify length as a positive integer or zero. Then add regular expressions to the list for each character-based column name where you want to apply a mask. Use the following format to specify fully-qualified column names: <schemaName> . <tableName> . <columnName> . The connector configuration can include multiple properties that specify different lengths. column.propagate.source.type No default An optional, comma-separated list of regular expressions that match the fully-qualified names of columns for which you want the connector to emit extra parameters that represent column metadata. When this property is set, the connector adds the following fields to the schema of event records: __debezium.source.column.type __debezium.source.column.length __debezium.source.column.scale These parameters propagate a column's original type name and length (for variable-width types), respectively. Enabling the connector to emit this extra data can assist in properly sizing specific numeric or character-based columns in sink databases. The fully-qualified name of a column observes one of the following formats: <tableName> . <columnName> , or <schemaName> . <tableName> . <columnName> . To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name. datatype.propagate.source.type No default An optional, comma-separated list of regular expressions that specify the fully-qualified names of data types that are defined for columns in a database. When this property is set, for columns with matching data types, the connector emits event records that include the following extra fields in their schema: __debezium.source.column.type __debezium.source.column.length __debezium.source.column.scale These parameters propagate a column's original type name and length (for variable-width types), respectively. Enabling the connector to emit this extra data can assist in properly sizing specific numeric or character-based columns in sink databases. The fully-qualified name of a column observes one of the following formats: <tableName> . <typeName> , or <schemaName> . <tableName> . <typeName> . To match the name of a data type, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the data type; the expression does not match substrings that might be present in a type name. For the list of Oracle-specific data type names, see the Oracle data type mappings . heartbeat.interval.ms 0 Specifies, in milliseconds, how frequently the connector sends messages to a heartbeat topic. 
Use this property to determine whether the connector continues to receive change events from the source database. It can also be useful to set the property in situations where no change events occur in captured tables for an extended period. In such a case, although the connector continues to read the redo log, it emits no change event messages, so that the offset in the Kafka topic remains unchanged. Because the connector does not flush the latest system change number (SCN) that it read from the database, the database might retain the redo log files for longer than necessary. If the connector restarts, the extended retention period could result in the connector redundantly sending some change events. The default value of 0 prevents the connector from sending any heartbeat messages. heartbeat.action.query No default Specifies a query that the connector executes on the source database when the connector sends a heartbeat message. For example: INSERT INTO test_heartbeat_table (text) VALUES ('test_heartbeat') The connector runs the query after it emits a heartbeat message . Set this property and create a heartbeat table to receive the heartbeat messages to resolve situations in which Debezium fails to synchronize offsets on low-traffic databases that are on the same host as a high-traffic database . After the connector inserts records into the configured table, it is able to receive changes from the low-traffic database and acknowledge SCN changes in the database, so that offsets can be synchronized with the broker. snapshot.delay.ms No default Specifies an interval in milliseconds that the connector waits after it starts before it takes a snapshot. Use this property to prevent snapshot interruptions when you start multiple connectors in a cluster, which might cause re-balancing of connectors. snapshot.fetch.size 10000 Specifies the maximum number of rows that should be read in one go from each table while taking a snapshot. The connector reads table contents in multiple batches of the specified size. query.fetch.size 10000 Specifies the number of rows that will be fetched for each database round-trip of a given query. Using a value of 0 will use the JDBC driver's default fetch size. provide.transaction.metadata false Set the property to true if you want Debezium to generate events with transaction boundaries and enriches data events envelope with transaction metadata. See Transaction Metadata for additional details. log.mining.strategy redo_log_catalog Specifies the mining strategy that controls how Oracle LogMiner builds and uses a given data dictionary for resolving table and column ids to names. redo_log_catalog :: Writes the data dictionary to the online redo logs causing more archive logs to be generated over time. This also enables tracking DDL changes against captured tables, so if the schema changes frequently this is the ideal choice. online_catalog :: Uses the database's current data dictionary to resolve object ids and does not write any extra information to the online redo logs. This allows LogMiner to mine substantially faster but at the expense that DDL changes cannot be tracked. If the captured table(s) schema changes infrequently or never, this is the ideal choice. log.mining.query.filter.mode none Specifies the mining query mode that controls how the Oracle LogMiner query is built. none :: The query is generated without doing any schema, table, or username filtering in the query. 
in :: The query is generated using a standard SQL in-clause to filter schema, table, and usernames on the database side. The schema, table, and username configuration include/exclude lists should not specify any regular expressions as the query is built using the values directly. regex :: The query is generated using Oracle's REGEXP_LIKE operator to filter schema and table names on the database side, along with usernames using a SQL in-clause. The schema and table configuration include/exclude lists can safely specify regular expressions. log.mining.buffer.type memory The buffer type controls how the connector manages buffering transaction data. memory - Uses the JVM process' heap to buffer all transaction data. Choose this option if you don't expect the connector to process a high number of long-running or large transactions. When this option is active, the buffer state is not persisted across restarts. Following a restart, the connector recreates the buffer from the SCN value of the current offset. log.mining.session.max.ms 0 The maximum number of milliseconds that a LogMiner session can be active before a new session is used. For low volume systems, a LogMiner session may consume too much PGA memory when the same session is used for a long period of time. The default behavior is to only use a new LogMiner session when a log switch is detected. By setting this value to something greater than 0 , you specify the maximum number of milliseconds a LogMiner session can be active before it gets stopped and started to deallocate and reallocate PGA memory. log.mining.restart.connection false Specifies whether the JDBC connection will be closed and re-opened on log switches or when the mining session has reached the maximum lifetime threshold. By default, the JDBC connection is not closed across log switches or maximum session lifetimes. This should be enabled if you experience excessive Oracle SGA growth with LogMiner. log.mining.batch.size.min 1000 The minimum SCN interval size that this connector attempts to read from redo/archive logs. The active batch size is also increased/decreased by this amount for tuning connector throughput when needed. log.mining.batch.size.max 100000 The maximum SCN interval size that this connector uses when reading from redo/archive logs. log.mining.batch.size.default 20000 The starting SCN interval size that the connector uses for reading data from redo/archive logs. This also serves as a measure for adjusting batch size - when the difference between the current SCN and the beginning/end SCN of the batch is bigger than this value, the batch size is increased/decreased. log.mining.sleep.time.min.ms 0 The minimum amount of time that the connector sleeps after reading data from redo/archive logs and before starting to read data again. Value is in milliseconds. log.mining.sleep.time.max.ms 3000 The maximum amount of time that the connector sleeps after reading data from redo/archive logs and before starting to read data again. Value is in milliseconds. log.mining.sleep.time.default.ms 1000 The starting amount of time that the connector sleeps after reading data from redo/archive logs and before starting to read data again. Value is in milliseconds. log.mining.sleep.time.increment.ms 200 The maximum amount of time up or down that the connector uses to tune the optimal sleep time when reading data from LogMiner. Value is in milliseconds. log.mining.archive.log.hours 0 The number of hours in the past from SYSDATE to mine archive logs.
When the default setting ( 0 ) is used, the connector mines all archive logs. log.mining.archive.log.only.mode false Controls whether or not the connector mines changes from just archive logs or a combination of the online redo logs and archive logs (the default). Redo logs use a circular buffer that can be archived at any point. In environments where online redo logs are archived frequently, this can lead to LogMiner session failures. In contrast to redo logs, archive logs are guaranteed to be reliable. Set this option to true to force the connector to mine archive logs only. After you set the connector to mine only the archive logs, the latency between an operation being committed and the connector emitting an associated change event might increase. The degree of latency depends on how frequently the database is configured to archive online redo logs. log.mining.archive.log.only.scn.poll.interval.ms 10000 The number of milliseconds the connector will sleep in between polling to determine if the starting system change number is in the archive logs. If log.mining.archive.log.only.mode is not enabled, this setting is not used. log.mining.transaction.retention.ms 0 Positive integer value that specifies the number of milliseconds to retain long-running transactions between redo log switches. When set to 0 , transactions are retained until a commit or rollback is detected. By default, the LogMiner adapter maintains an in-memory buffer of all running transactions. Because all of the DML operations that are part of a transaction are buffered until a commit or rollback is detected, long-running transactions should be avoided in order to not overflow that buffer. Any transaction that exceeds this configured value is discarded entirely, and the connector does not emit any messages for the operations that were part of the transaction. log.mining.archive.destination.name No default Specifies the configured Oracle archive destination to use when mining archive logs with LogMiner. The default behavior automatically selects the first valid, local configured destination. However, you can specify a particular destination by providing its name, for example, LOG_ARCHIVE_DEST_5 . log.mining.username.include.list No default List of database users to include in the LogMiner query. It can be useful to set this property if you want the capturing process to include changes from the specified users. log.mining.username.exclude.list No default List of database users to exclude from the LogMiner query. It can be useful to set this property if you want the capturing process to always exclude the changes that specific users make. log.mining.scn.gap.detection.gap.size.min 1000000 Specifies a value that the connector compares to the difference between the current and previous SCN values to determine whether an SCN gap exists. If the difference between the SCN values is greater than the specified value, and the time difference is smaller than log.mining.scn.gap.detection.time.interval.max.ms , then an SCN gap is detected, and the connector uses a mining window larger than the configured maximum batch. log.mining.scn.gap.detection.time.interval.max.ms 20000 Specifies a value, in milliseconds, that the connector compares to the difference between the current and previous SCN timestamps to determine whether an SCN gap exists.
If the difference between the timestamps is less than the specified value, and the SCN delta is greater than log.mining.scn.gap.detection.gap.size.min , then an SCN gap is detected and the connector uses a mining window larger than the configured maximum batch. log.mining.flush.table.name LOG_MINING_FLUSH Specifies the name of the flush table that coordinates flushing the Oracle LogWriter Buffer (LGWR) to the redo logs. Typically, multiple connectors can use the same flush table. However, if connectors encounter table lock contention errors, use this property to specify a dedicated table for each connector deployment. lob.enabled false Controls whether or not large object (CLOB or BLOB) column values are emitted in change events. By default, change events have large object columns, but the columns contain no values. There is a certain amount of overhead in processing and managing large object column types and payloads. To capture large object values and serialized them in change events, set this option to true . Note Use of large object data types is a Technology Preview feature. unavailable.value.placeholder __debezium_unavailable_value Specifies the constant that the connector provides to indicate that the original value is unchanged and not provided by the database. rac.nodes No default A comma-separated list of Oracle Real Application Clusters (RAC) node host names or addresses. This field is required to enable compatibility with an Oracle RAC deployment. Specify the list of RAC nodes by using one of the following methods: Specify a value for database.port , and use the specified port value for each address in the rac.nodes list. For example: database.port=1521 rac.nodes=192.168.1.100,192.168.1.101 Specify a value for database.port , and override the default port for one or more entries in the list. The list can include entries that use the default database.port value, and entries that define their own unique port values. For example: database.port=1521 rac.nodes=192.168.1.100,192.168.1.101:1522 If you supply a raw JDBC URL for the database by using the database.url property, instead of defining a value for database.port , each RAC node entry must explicitly specify a port value. skipped.operations t A comma-separated list of the operation types that you want the connector to skip during streaming. You can configure the connector to skip the following types of operations: c (insert/create) u (update) d (delete) t (truncate) By default, only truncate operations are skipped. signal.data.collection No default value Fully-qualified name of the data collection that is used to send signals to the connector. When you use this property with an Oracle pluggable database (PDB), set its value to the name of the root database. Use the following format to specify the collection name: <databaseName> . <schemaName> . <tableName> signal.enabled.channels source List of the signaling channel names that are enabled for the connector. By default, the following channels are available: source kafka file jmx notification.enabled.channels No default List of notification channel names that are enabled for the connector. By default, the following channels are available: sink log jmx incremental.snapshot.chunk.size 1024 The maximum number of rows that the connector fetches and reads into memory during an incremental snapshot chunk. Increasing the chunk size provides greater efficiency, because the snapshot runs fewer snapshot queries of a greater size. 
However, larger chunk sizes also require more memory to buffer the snapshot data. Adjust the chunk size to a value that provides the best performance in your environment. topic.naming.strategy io.debezium.schema.SchemaTopicNamingStrategy The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat event etc., defaults to SchemaTopicNamingStrategy . topic.delimiter . Specify the delimiter for topic name, defaults to . . topic.cache.size 10000 The size used for holding the topic names in bounded concurrent hash map. This cache will help to determine the topic name corresponding to a given data collection. topic.heartbeat.prefix __debezium-heartbeat Controls the name of the topic to which the connector sends heartbeat messages. The topic name has this pattern: topic.heartbeat.prefix . topic.prefix For example, if the topic prefix is fulfillment , the default topic name is __debezium-heartbeat.fulfillment . topic.transaction transaction Controls the name of the topic to which the connector sends transaction metadata messages. The topic name has this pattern: topic.prefix . topic.transaction For example, if the topic prefix is fulfillment , the default topic name is fulfillment.transaction . snapshot.max.threads 1 Specifies the number of threads that the connector uses when performing an initial snapshot. To enable parallel initial snapshots, set the property to a value greater than 1. In a parallel initial snapshot, the connector processes multiple tables concurrently. Important Parallel initial snapshots is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . errors.max.retries -1 The maximum number of retries on retriable errors (e.g. connection errors) before failing (-1 = no limit, 0 = disabled, > 0 = num of retries). Debezium Oracle connector database schema history configuration properties Debezium provides a set of schema.history.internal.* properties that control how the connector interacts with the schema history topic. The following table describes the schema.history.internal properties for configuring the Debezium connector. Table 7.17. Connector database schema history configuration properties Property Default Description schema.history.internal.kafka.topic No default The full name of the Kafka topic where the connector stores the database schema history. schema.history.internal.kafka.bootstrap.servers No default A list of host/port pairs that the connector uses for establishing an initial connection to the Kafka cluster. This connection is used for retrieving the database schema history previously stored by the connector, and for writing each DDL statement read from the source database. Each pair should point to the same Kafka cluster used by the Kafka Connect process. schema.history.internal.kafka.recovery.poll.interval.ms 100 An integer value that specifies the maximum number of milliseconds the connector should wait during startup/recovery while polling for persisted data. The default is 100ms. 
schema.history.internal.kafka.query.timeout.ms 3000 An integer value that specifies the maximum number of milliseconds the connector should wait while fetching cluster information using Kafka admin client. schema.history.internal.kafka.create.timeout.ms 30000 An integer value that specifies the maximum number of milliseconds the connector should wait while create kafka history topic using Kafka admin client. schema.history.internal.kafka.recovery.attempts 100 The maximum number of times that the connector should try to read persisted history data before the connector recovery fails with an error. The maximum amount of time to wait after receiving no data is recovery.attempts x recovery.poll.interval.ms . schema.history.internal.skip.unparseable.ddl false A Boolean value that specifies whether the connector should ignore malformed or unknown database statements or stop processing so a human can fix the issue. The safe default is false . Skipping should be used only with care as it can lead to data loss or mangling when the binlog is being processed. schema.history.internal.store.only.captured.tables.ddl false A Boolean value that specifies whether the connector records schema structures from all tables in a schema or database, or only from tables that are designated for capture. Specify one of the following values: false (default) During a database snapshot, the connector records the schema data for all non-system tables in the database, including tables that are not designated for capture. It's best to retain the default setting. If you later decide to capture changes from tables that you did not originally designate for capture, the connector can easily begin to capture data from those tables, because their schema structure is already stored in the schema history topic. Debezium requires the schema history of a table so that it can identify the structure that was present at the time that a change event occurred. true During a database snapshot, the connector records the table schemas only for the tables from which Debezium captures change events. If you change the default value, and you later configure the connector to capture data from other tables in the database, the connector lacks the schema information that it requires to capture change events from the tables. schema.history.internal.store.only.captured.databases.ddl false A Boolean value that specifies whether the connector records schema structures from all logical databases in the database instance. Specify one of the following values: true The connector records schema structures only for tables in the logical database and schema from which Debezium captures change events. false The connector records schema structures for all logical databases. Note The default value is true for MySQL Connector Pass-through database schema history properties for configuring producer and consumer clients Debezium relies on a Kafka producer to write schema changes to database schema history topics. Similarly, it relies on a Kafka consumer to read from database schema history topics when a connector starts. You define the configuration for the Kafka producer and consumer clients by assigning values to a set of pass-through configuration properties that begin with the schema.history.internal.producer.* and schema.history.internal.consumer.* prefixes. 
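For instance, a configuration might pass TLS settings to both clients. The following sketch is illustrative only: the security.protocol and ssl.* entries are standard Kafka client options, and the keystore and truststore locations and passwords shown are placeholders rather than values taken from this document.
schema.history.internal.producer.security.protocol=SSL
schema.history.internal.producer.ssl.keystore.location=/var/private/ssl/kafka.keystore.jks
schema.history.internal.producer.ssl.keystore.password=<keystore-password>
schema.history.internal.producer.ssl.truststore.location=/var/private/ssl/kafka.truststore.jks
schema.history.internal.producer.ssl.truststore.password=<truststore-password>
schema.history.internal.consumer.security.protocol=SSL
schema.history.internal.consumer.ssl.keystore.location=/var/private/ssl/kafka.keystore.jks
schema.history.internal.consumer.ssl.keystore.password=<keystore-password>
schema.history.internal.consumer.ssl.truststore.location=/var/private/ssl/kafka.truststore.jks
schema.history.internal.consumer.ssl.truststore.password=<truststore-password>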
The pass-through producer and consumer database schema history properties control a range of behaviors, such as how these clients secure connections with the Kafka broker, as shown in the preceding example. Debezium strips the prefix from the property name before it passes the property to the Kafka client. See the Kafka documentation for more details about Kafka producer configuration properties and Kafka consumer configuration properties . Debezium connector Kafka signals configuration properties Debezium provides a set of signal.* properties that control how the connector interacts with the Kafka signals topic. The following table describes the Kafka signal properties. Table 7.18. Kafka signals configuration properties Property Default Description signal.kafka.topic <topic.prefix>-signal The name of the Kafka topic that the connector monitors for ad hoc signals. Note If automatic topic creation is disabled, you must manually create the required signaling topic. A signaling topic is required to preserve signal ordering. The signaling topic must have a single partition. signal.kafka.groupId kafka-signal The name of the group ID that is used by Kafka consumers. signal.kafka.bootstrap.servers No default A list of host/port pairs that the connector uses for establishing an initial connection to the Kafka cluster. Each pair references the Kafka cluster that is used by the Debezium Kafka Connect process. signal.kafka.poll.timeout.ms 100 An integer value that specifies the maximum number of milliseconds that the connector waits when polling signals. Debezium connector pass-through signals Kafka consumer client configuration properties The Debezium connector provides for pass-through configuration of the signals Kafka consumer. Pass-through signals properties begin with the prefix signal.consumer.* . For example, the connector passes properties such as signal.consumer.security.protocol=SSL to the Kafka consumer. Debezium strips the prefixes from the properties before it passes the properties to the Kafka signals consumer. Debezium connector sink notifications configuration properties The following table describes the notification properties. Table 7.19. Sink notification configuration properties Property Default Description notification.sink.topic.name No default The name of the topic that receives notifications from Debezium. This property is required when you configure the notification.enabled.channels property to include sink as one of the enabled notification channels. Debezium Oracle connector pass-through database driver configuration properties The Debezium connector provides for pass-through configuration of the database driver. Pass-through database properties begin with the prefix driver.* . For example, the connector passes properties such as driver.foobar=false to the JDBC URL. As is the case with the pass-through properties for database schema history clients , Debezium strips the prefixes from the properties before it passes them to the database driver. 7.7. Monitoring Debezium Oracle connector performance The Debezium Oracle connector provides three metric types in addition to the built-in support for JMX metrics that Apache Zookeeper, Apache Kafka, and Kafka Connect have.
snapshot metrics ; for monitoring the connector when performing snapshots streaming metrics ; for monitoring the connector when processing change events schema history metrics ; for monitoring the status of the connector's schema history Please refer to the monitoring documentation for details of how to expose these metrics via JMX. 7.7.1. Debezium Oracle connector snapshot metrics The MBean is debezium.oracle:type=connector-metrics,context=snapshot,server= <topic.prefix> . Snapshot metrics are not exposed unless a snapshot operation is active, or if a snapshot has occurred since the last connector start. The following table lists the shapshot metrics that are available. Attributes Type Description LastEvent string The last snapshot event that the connector has read. MilliSecondsSinceLastEvent long The number of milliseconds since the connector has read and processed the most recent event. TotalNumberOfEventsSeen long The total number of events that this connector has seen since last started or reset. NumberOfEventsFiltered long The number of events that have been filtered by include/exclude list filtering rules configured on the connector. CapturedTables string[] The list of tables that are captured by the connector. QueueTotalCapacity int The length the queue used to pass events between the snapshotter and the main Kafka Connect loop. QueueRemainingCapacity int The free capacity of the queue used to pass events between the snapshotter and the main Kafka Connect loop. TotalTableCount int The total number of tables that are being included in the snapshot. RemainingTableCount int The number of tables that the snapshot has yet to copy. SnapshotRunning boolean Whether the snapshot was started. SnapshotPaused boolean Whether the snapshot was paused. SnapshotAborted boolean Whether the snapshot was aborted. SnapshotCompleted boolean Whether the snapshot completed. SnapshotDurationInSeconds long The total number of seconds that the snapshot has taken so far, even if not complete. Includes also time when snapshot was paused. SnapshotPausedDurationInSeconds long The total number of seconds that the snapshot was paused. If the snapshot was paused several times, the paused time adds up. RowsScanned Map<String, Long> Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table. MaxQueueSizeInBytes long The maximum buffer of the queue in bytes. This metric is available if max.queue.size.in.bytes is set to a positive long value. CurrentQueueSizeInBytes long The current volume, in bytes, of records in the queue. The connector also provides the following additional snapshot metrics when an incremental snapshot is executed: Attributes Type Description ChunkId string The identifier of the current snapshot chunk. ChunkFrom string The lower bound of the primary key set defining the current chunk. ChunkTo string The upper bound of the primary key set defining the current chunk. TableFrom string The lower bound of the primary key set of the currently snapshotted table. TableTo string The upper bound of the primary key set of the currently snapshotted table. 7.7.2. Debezium Oracle connector streaming metrics The MBean is debezium.oracle:type=connector-metrics,context=streaming,server= <topic.prefix> . The following table lists the streaming metrics that are available. Attributes Type Description LastEvent string The last streaming event that the connector has read. 
MilliSecondsSinceLastEvent long The number of milliseconds since the connector has read and processed the most recent event. TotalNumberOfEventsSeen long The total number of events that this connector has seen since the last start or metrics reset. TotalNumberOfCreateEventsSeen long The total number of create events that this connector has seen since the last start or metrics reset. TotalNumberOfUpdateEventsSeen long The total number of update events that this connector has seen since the last start or metrics reset. TotalNumberOfDeleteEventsSeen long The total number of delete events that this connector has seen since the last start or metrics reset. NumberOfEventsFiltered long The number of events that have been filtered by include/exclude list filtering rules configured on the connector. CapturedTables string[] The list of tables that are captured by the connector. QueueTotalCapacity int The length the queue used to pass events between the streamer and the main Kafka Connect loop. QueueRemainingCapacity int The free capacity of the queue used to pass events between the streamer and the main Kafka Connect loop. Connected boolean Flag that denotes whether the connector is currently connected to the database server. MilliSecondsBehindSource long The number of milliseconds between the last change event's timestamp and the connector processing it. The values will incoporate any differences between the clocks on the machines where the database server and the connector are running. NumberOfCommittedTransactions long The number of processed transactions that were committed. SourceEventPosition Map<String, String> The coordinates of the last received event. LastTransactionId string Transaction identifier of the last processed transaction. MaxQueueSizeInBytes long The maximum buffer of the queue in bytes. This metric is available if max.queue.size.in.bytes is set to a positive long value. CurrentQueueSizeInBytes long The current volume, in bytes, of records in the queue. The Debezium Oracle connector also provides the following additional streaming metrics: Table 7.20. Descriptions of additional streaming metrics Attributes Type Description CurrentScn BigInteger The most recent system change number that has been processed. OldestScn BigInteger The oldest system change number in the transaction buffer. CommittedScn BigInteger The last committed system change number from the transaction buffer. OffsetScn BigInteger The system change number currently written to the connector's offsets. CurrentRedoLogFileName string[] Array of the log files that are currently mined. MinimumMinedLogCount long The minimum number of logs specified for any LogMiner session. MaximumMinedLogCount long The maximum number of logs specified for any LogMiner session. RedoLogStatus string[] Array of the current state for each mined logfile with the format filename | status . SwitchCounter int The number of times the database has performed a log switch for the last day. LastCapturedDmlCount long The number of DML operations observed in the last LogMiner session query. MaxCapturedDmlInBatch long The maximum number of DML operations observed while processing a single LogMiner session query. TotalCapturedDmlCount long The total number of DML operations observed. FetchingQueryCount long The total number of LogMiner session query (aka batches) performed. LastDurationOfFetchQueryInMilliseconds long The duration of the last LogMiner session query's fetch in milliseconds. 
MaxDurationOfFetchQueryInMilliseconds long The maximum duration of any LogMiner session query's fetch in milliseconds. LastBatchProcessingTimeInMilliseconds long The duration for processing the last LogMiner query batch results in milliseconds. TotalParseTimeInMilliseconds long The time in milliseconds spent parsing DML event SQL statements. LastMiningSessionStartTimeInMilliseconds long The duration in milliseconds to start the last LogMiner session. MaxMiningSessionStartTimeInMilliseconds long The longest duration in milliseconds to start a LogMiner session. TotalMiningSessionStartTimeInMilliseconds long The total duration in milliseconds spent by the connector starting LogMiner sessions. MinBatchProcessingTimeInMilliseconds long The minimum duration in milliseconds spent processing results from a single LogMiner session. MaxBatchProcessingTimeInMilliseconds long The maximum duration in milliseconds spent processing results from a single LogMiner session. TotalProcessingTimeInMilliseconds long The total duration in milliseconds spent processing results from LogMiner sessions. TotalResultSetNextTimeInMilliseconds long The total duration in milliseconds spent by the JDBC driver fetching the row to be processed from the log mining view. TotalProcessedRows long The total number of rows processed from the log mining view across all sessions. BatchSize int The number of entries fetched by the log mining query per database round-trip. MillisecondToSleepBetweenMiningQuery long The number of milliseconds the connector sleeps before fetching another batch of results from the log mining view. MaxBatchProcessingThroughput long The maximum number of rows/second processed from the log mining view. AverageBatchProcessingThroughput long The average number of rows/second processed from the log mining. LastBatchProcessingThroughput long The average number of rows/second processed from the log mining view for the last batch. NetworkConnectionProblemsCounter long The number of connection problems detected. HoursToKeepTransactionInBuffer int The number of hours that transactions are retained by the connector's in-memory buffer without being committed or rolled back before being discarded. For more information, see log.mining.transaction.retention.ms . NumberOfActiveTransactions long The number of current active transactions in the transaction buffer. NumberOfCommittedTransactions long The number of committed transactions in the transaction buffer. NumberOfRolledBackTransactions long The number of rolled back transactions in the transaction buffer. CommitThroughput long The average number of committed transactions per second in the transaction buffer. RegisteredDmlCount long The number of registered DML operations in the transaction buffer. LagFromSourceInMilliseconds long The time difference in milliseconds between when a change occurred in the transaction logs and when its added to the transaction buffer. MaxLagFromSourceInMilliseconds long The maximum time difference in milliseconds between when a change occurred in the transaction logs and when its added to the transaction buffer. MinLagFromSourceInMilliseconds long The minimum time difference in milliseconds between when a change occurred in the transaction logs and when its added to the transaction buffer. AbandonedTransactionIds string[] An array of the most recent abandoned transaction identifiers removed from the transaction buffer due to their age. See log.mining.transaction.retention.ms for details. 
RolledBackTransactionIds string[] An array of the most recent transaction identifiers that have been mined and rolled back in the transaction buffer. LastCommitDurationInMilliseconds long The duration of the last transaction buffer commit operation in milliseconds. MaxCommitDurationInMilliseconds long The duration of the longest transaction buffer commit operation in milliseconds. ErrorCount int The number of errors detected. WarningCount int The number of warnings detected. ScnFreezeCount int The number of times that the system change number was checked for advancement and remains unchanged. A high value can indicate that a long-running transactions is ongoing and is preventing the connector from flushing the most recently processed system change number to the connector's offsets. When conditions are optimal, the value should be close to or equal to 0 . UnparsableDdlCount int The number of DDL records that have been detected but could not be parsed by the DDL parser. This should always be 0 ; however when allowing unparsable DDL to be skipped, this metric can be used to determine if any warnings have been written to the connector logs. MiningSessionUserGlobalAreaMemoryInBytes long The current mining session's user global area (UGA) memory consumption in bytes. MiningSessionUserGlobalAreaMaxMemoryInBytes long The maximum mining session's user global area (UGA) memory consumption in bytes across all mining sessions. MiningSessionProcessGlobalAreaMemoryInBytes long The current mining session's process global area (PGA) memory consumption in bytes. MiningSessionProcessGlobalAreaMaxMemoryInBytes long The maximum mining session's process global area (PGA) memory consumption in bytes across all mining sessions. 7.7.3. Debezium Oracle connector schema history metrics The MBean is debezium.oracle:type=connector-metrics,context=schema-history,server= <topic.prefix> . The following table lists the schema history metrics that are available. Attributes Type Description Status string One of STOPPED , RECOVERING (recovering history from the storage), RUNNING describing the state of the database schema history. RecoveryStartTime long The time in epoch seconds at what recovery has started. ChangesRecovered long The number of changes that were read during recovery phase. ChangesApplied long the total number of schema changes applied during recovery and runtime. MilliSecondsSinceLast RecoveredChange long The number of milliseconds that elapsed since the last change was recovered from the history store. MilliSecondsSinceLast AppliedChange long The number of milliseconds that elapsed since the last change was applied. LastRecoveredChange string The string representation of the last change recovered from the history store. LastAppliedChange string The string representation of the last applied change. 7.8. Oracle connector frequently asked questions Is Oracle 11g supported? Oracle 11g is not supported; however, we do aim to be backward compatible with Oracle 11g on a best-effort basis. We rely on the community to communicate compatibility concerns with Oracle 11g as well as provide bug fixes when a regression is identified. Isn't Oracle LogMiner deprecated? No, Oracle only deprecated the continuous mining option with Oracle LogMiner in Oracle 12c and removed that option starting with Oracle 19c. The Debezium Oracle connector does not rely on this option to function, and therefore can safely be used with newer versions of Oracle without any impact. How do I change the position in the offsets? 
The Debezium Oracle connector maintains two critical values in the offsets, a field named scn and another named commit_scn . The scn field is a string that represents the low-watermark starting position the connector used when capturing changes. Find out the name of the topic that contains the connector offsets. This is configured based on the value set as the offset.storage.topic configuration property. Find out the last offset for the connector, the key under which it is stored, and identify the partition used to store the offset. This can be done using the kafkacat utility script provided by the Kafka broker installation. An example might look like this: kafkacat -b localhost -C -t my_connect_offsets -f 'Partition(%p) %k %s\n' Partition(11) ["inventory-connector",{"server":"server1"}] {"scn":"324567897", "commit_scn":"324567897: 0x2832343233323:1"} The key for inventory-connector is ["inventory-connector",{"server":"server1"}] , the partition is 11 , and the last offset is the content that follows the key. To move back to a previous offset, stop the connector and issue the following command: echo '["inventory-connector",{"server":"server1"}]|{"scn":"3245675000","commit_scn":"324567500"}' | \ kafkacat -P -b localhost -t my_connect_offsets -K \| -p 11 This writes to partition 11 of the my_connect_offsets topic the given key and offset value. In this example, we are rewinding the connector back to SCN 3245675000 rather than 324567897 . What happens if the connector cannot find logs with a given offset SCN? The Debezium connector maintains a low and a high watermark SCN value in the connector offsets. The low-watermark SCN represents the starting position and must exist in the available online redo or archive logs in order for the connector to start successfully. When the connector reports it cannot find this offset SCN, this indicates that the logs that are still available do not contain the SCN and therefore the connector cannot mine changes from where it left off. When this happens, there are two options. The first is to remove the history topic and offsets for the connector and restart the connector, taking a new snapshot as suggested. This will guarantee that no data loss will occur for any topic consumers. The second is to manually manipulate the offsets, advancing the SCN to a position that is available in the redo or archive logs. This will cause changes that occurred between the old SCN value and the newly provided SCN value to be lost and not written to the topics. This is not recommended. What's the difference between the various mining strategies? The Debezium Oracle connector provides two options for log.mining.strategy . The default is redo_log_catalog , and this instructs the connector to write the Oracle data dictionary to the redo logs every time a log switch is detected. This data dictionary is necessary for Oracle LogMiner to track schema changes effectively when parsing the redo and archive logs. This option will generate more archive logs than usual but allows the structure of captured tables to be changed in real time without any impact on capturing data changes. This option generally requires more Oracle database memory and will cause the Oracle LogMiner session and process to take slightly longer to start after each log switch. The alternative option, online_catalog , does not write the data dictionary to the redo logs. Instead, Oracle LogMiner will always use the online data dictionary that contains the current state of the table's structure.
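As a minimal sketch of how this choice is expressed in the connector configuration (using only the log.mining.strategy property documented earlier), a deployment whose captured tables rarely change schema might set:
log.mining.strategy=online_catalog
whereas the default behavior corresponds to log.mining.strategy=redo_log_catalog and needs no explicit setting.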
With the online catalog in use, if a table's structure changes and no longer matches the online data dictionary, Oracle LogMiner is unable to resolve the table or column names for that table. This mining strategy option should not be used if the tables being captured are subject to frequent schema changes. It's important that all data changes be lock-stepped with the schema change: ensure that all changes for the table have been captured from the logs, stop the connector, apply the schema change, and then restart the connector and resume data changes on the table. This option requires less Oracle database memory and Oracle LogMiner sessions generally start substantially faster since the data dictionary does not need to be loaded or primed by the LogMiner process. Why does the connector appear to stop capturing changes on AWS? Due to the fixed idle timeout of 350 seconds on the AWS Gateway Load Balancer , JDBC calls that require more than 350 seconds to complete can hang indefinitely. In situations where calls to the Oracle LogMiner API take more than 350 seconds to complete, a timeout can be triggered, causing the AWS Gateway Load Balancer to hang. For example, such timeouts can occur when a LogMiner session that processes large amounts of data runs concurrently with Oracle's periodic checkpointing task. To prevent timeouts from occurring on the AWS Gateway Load Balancer, enable keep-alive packets from the Kafka Connect environment by performing the following steps as root or a super-user: From a terminal, run the following command: sysctl -w net.ipv4.tcp_keepalive_time=60 Edit /etc/sysctl.conf and set the value of the following variable as shown: net.ipv4.tcp_keepalive_time=60 Reconfigure the Debezium for Oracle connector to use the database.url property rather than database.hostname and add the (ENABLE=broken) Oracle connect string descriptor as shown in the following example: database.url=jdbc:oracle:thin:username/password!@(DESCRIPTION=(ENABLE=broken)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(Host=hostname)(Port=port)))(CONNECT_DATA=(SERVICE_NAME=serviceName))) The preceding steps configure the TCP network stack to send keep-alive packets every 60 seconds. As a result, the AWS Gateway Load Balancer does not time out when JDBC calls to the LogMiner API take more than 350 seconds to complete, enabling the connector to continue to read changes from the database's transaction logs. What's the cause for ORA-01555 and how to handle it? The Debezium Oracle connector uses flashback queries when the initial snapshot phase executes. A flashback query is a special type of query that relies on the flashback area, maintained by the database's UNDO_RETENTION database parameter, to return the results of a query based on the contents that the table had at a given time, or in our case, at a given SCN. By default, Oracle generally only maintains an undo or flashback area for approximately 15 minutes unless this has been increased or decreased by your database administrator. For configurations that capture large tables, it may take longer than 15 minutes or your configured UNDO_RETENTION to perform the initial snapshot, and this will eventually lead to an ORA-01555 exception. The first way to deal with this exception is to work with your database administrator and see whether they can increase the UNDO_RETENTION database parameter temporarily. This does not require a restart of the Oracle database, so this can be done online without impacting database availability.
However, changing this may still lead to the above exception or a "snapshot too old" exception if the tablespace has inadequate space to store the necessary undo data. The second way to deal with this exception is to not rely on the initial snapshot at all, setting the snapshot.mode to schema_only and then instead relying on incremental snapshots. An incremental snapshot does not rely on a flashback query and therefore isn't subject to ORA-01555 exceptions. What's the cause for ORA-04036 and how to handle it? The Debezium Oracle connector may report an ORA-04036 exception when the database changes occur infrequently. An Oracle LogMiner session is started and re-used until a log switch is detected. The session is re-used because it provides the optimal performance utilization with Oracle LogMiner, but should a long-running mining session occur, this can lead to excessive PGA memory usage, eventually causing an ORA-04036 exception. This exception can be avoided by specifying how frequently Oracle switches redo logs or how long the Debezium Oracle connector is allowed to re-use the mining session. The Debezium Oracle connector provides a configuration option, log.mining.session.max.ms , which controls how long the current Oracle LogMiner session can be re-used before being closed and a new session started. This allows the database resources to be kept in check without exceeding the PGA memory allowed by the database. What's the cause for ORA-01882 and how to handle it? The Debezium Oracle connector may report an ORA-01882 exception when connecting to an Oracle database. This happens when the timezone information cannot be correctly resolved by the JDBC driver. In order to solve this driver-related problem, the driver needs to be told to not resolve the timezone details using regions. This can be done by specifying a driver pass-through property using driver.oracle.jdbc.timezoneAsRegion=false . What's the cause for ORA-25191 and how to handle it? The Debezium Oracle connector automatically ignores index-organized tables (IOT) as they are not supported by Oracle LogMiner. However, if an ORA-25191 exception is thrown, this could be due to a unique corner case for such a mapping, and additional rules may be necessary to exclude these automatically. If an ORA-25191 exception is thrown, please raise a Jira issue with the details about the table, its mappings, and its relationship to other parent tables. As a workaround, the include/exclude configuration options can be adjusted to prevent the connector from accessing such tables.
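To illustrate that workaround with a sketch, suppose a hypothetical IOT overflow segment named DEBEZIUM.SYS_IOT_OVER_71234 were triggering the error; an exclude-list entry such as the following, which is a regular expression matched against fully-qualified <schemaName>.<tableName> identifiers, would keep the connector from accessing it:
table.exclude.list=DEBEZIUM\.SYS_IOT_OVER_.*
The segment name here is invented for the example; substitute the object reported in your own ORA-25191 error.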
[ "INSERT INTO <signalTable> (id, type, data) VALUES ( '<id>' , '<snapshotType>' , '{\"data-collections\": [\" <tableName> \",\" <tableName> \"],\"type\":\" <snapshotType> \",\"additional-condition\":\" <additional-condition> \"}');", "INSERT INTO myschema.debezium_signal (id, type, data) 1 values ('ad-hoc-1', 2 'execute-snapshot', 3 '{\"data-collections\": [\"schema1.table1\", \"schema2.table2\"], 4 \"type\":\"incremental\"}, 5 \"additional-condition\":\"color=blue\"}'); 6", "SELECT * FROM <tableName> .", "SELECT * FROM <tableName> WHERE <additional-condition> .", "INSERT INTO <signalTable> (id, type, data) VALUES ( '<id>' , '<snapshotType>' , '{\"data-collections\": [\" <tableName> \",\" <tableName> \"],\"type\":\" <snapshotType> \",\"additional-condition\":\" <additional-condition> \"}');", "INSERT INTO myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{\"data-collections\": [\"schema1.products\"],\"type\":\"incremental\", \"additional-condition\":\"color=blue\"}');", "INSERT INTO myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{\"data-collections\": [\"schema1.products\"],\"type\":\"incremental\", \"additional-condition\":\"color=blue AND quantity>10\"}');", "{ \"before\":null, \"after\": { \"pk\":\"1\", \"value\":\"New data\" }, \"source\": { \"snapshot\":\"incremental\" 1 }, \"op\":\"r\", 2 \"ts_ms\":\"1620393591654\", \"transaction\":null }", "Key = `test_connector` Value = `{\"type\":\"execute-snapshot\",\"data\": {\"data-collections\": [\"schema1.table1\", \"schema1.table2\"], \"type\": \"INCREMENTAL\"}}`", "Key = `test_connector` Value = `{\"type\":\"execute-snapshot\",\"data\": {\"data-collections\": [\"schema1.products\"], \"type\": \"INCREMENTAL\", \"additional-condition\":\"color='blue'\"}}`", "Key = `test_connector` Value = `{\"type\":\"execute-snapshot\",\"data\": {\"data-collections\": [\"schema1.products\"], \"type\": \"INCREMENTAL\", \"additional-condition\":\"color='blue' AND brand='MyBrand'\"}}`", "INSERT INTO <signalTable> (id, type, data) values ( '<id>' , 'stop-snapshot', '{\"data-collections\": [\" <tableName> \",\" <tableName> \"],\"type\":\"incremental\"}');", "INSERT INTO myschema.debezium_signal (id, type, data) 1 values ('ad-hoc-1', 2 'stop-snapshot', 3 '{\"data-collections\": [\"schema1.table1\", \"schema2.table2\"], 4 \"type\":\"incremental\"}'); 5", "Key = `test_connector` Value = `{\"type\":\"stop-snapshot\",\"data\": {\"data-collections\": [\"schema1.table1\", \"schema1.table2\"], \"type\": \"INCREMENTAL\"}}`", "fulfillment.inventory.orders fulfillment.inventory.customers fulfillment.inventory.products", "{ \"schema\": { }, \"payload\": { \"source\": { \"version\": \"2.3.4.Final\", \"connector\": \"oracle\", \"name\": \"server1\", \"ts_ms\": 1588252618953, \"snapshot\": \"true\", \"db\": \"ORCLPDB1\", \"schema\": \"DEBEZIUM\", \"table\": \"CUSTOMERS\", \"txId\" : null, \"scn\" : \"1513734\", \"commit_scn\": \"1513754\", \"lcr_position\" : null, \"rs_id\": \"001234.00012345.0124\", \"ssn\": 1, \"redo_thread\": 1, \"user_name\": \"user\" }, \"ts_ms\": 1588252618953, 1 \"databaseName\": \"ORCLPDB1\", 2 \"schemaName\": \"DEBEZIUM\", // \"ddl\": \"CREATE TABLE \\\"DEBEZIUM\\\".\\\"CUSTOMERS\\\" \\n ( \\\"ID\\\" NUMBER(9,0) NOT NULL ENABLE, \\n \\\"FIRST_NAME\\\" VARCHAR2(255), \\n \\\"LAST_NAME\" VARCHAR2(255), \\n \\\"EMAIL\\\" VARCHAR2(255), \\n PRIMARY KEY (\\\"ID\\\") ENABLE, \\n SUPPLEMENTAL LOG DATA (ALL) COLUMNS\\n ) SEGMENT CREATION IMMEDIATE \\n PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 
255 \\n NOCOMPRESS LOGGING\\n STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645\\n PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1\\n BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)\\n TABLESPACE \\\"USERS\\\" \", 3 \"tableChanges\": [ 4 { \"type\": \"CREATE\", 5 \"id\": \"\\\"ORCLPDB1\\\".\\\"DEBEZIUM\\\".\\\"CUSTOMERS\\\"\", 6 \"table\": { 7 \"defaultCharsetName\": null, \"primaryKeyColumnNames\": [ 8 \"ID\" ], \"columns\": [ 9 { \"name\": \"ID\", \"jdbcType\": 2, \"nativeType\": null, \"typeName\": \"NUMBER\", \"typeExpression\": \"NUMBER\", \"charsetName\": null, \"length\": 9, \"scale\": 0, \"position\": 1, \"optional\": false, \"autoIncremented\": false, \"generated\": false }, { \"name\": \"FIRST_NAME\", \"jdbcType\": 12, \"nativeType\": null, \"typeName\": \"VARCHAR2\", \"typeExpression\": \"VARCHAR2\", \"charsetName\": null, \"length\": 255, \"scale\": null, \"position\": 2, \"optional\": false, \"autoIncremented\": false, \"generated\": false }, { \"name\": \"LAST_NAME\", \"jdbcType\": 12, \"nativeType\": null, \"typeName\": \"VARCHAR2\", \"typeExpression\": \"VARCHAR2\", \"charsetName\": null, \"length\": 255, \"scale\": null, \"position\": 3, \"optional\": false, \"autoIncremented\": false, \"generated\": false }, { \"name\": \"EMAIL\", \"jdbcType\": 12, \"nativeType\": null, \"typeName\": \"VARCHAR2\", \"typeExpression\": \"VARCHAR2\", \"charsetName\": null, \"length\": 255, \"scale\": null, \"position\": 4, \"optional\": false, \"autoIncremented\": false, \"generated\": false } ], \"attributes\": [ 10 { \"customAttribute\": \"attributeValue\" } ] } } ] } }", "{ \"schema\": { \"type\": \"struct\", \"fields\": [ { \"type\": \"string\", \"optional\": false, \"field\": \"databaseName\" } ], \"optional\": false, \"name\": \"io.debezium.connector.oracle.SchemaChangeKey\" }, \"payload\": { \"databaseName\": \"ORCLPDB1\" } }", "{ \"status\": \"BEGIN\", \"id\": \"5.6.641\", \"ts_ms\": 1486500577125, \"event_count\": null, \"data_collections\": null } { \"status\": \"END\", \"id\": \"5.6.641\", \"ts_ms\": 1486500577691, \"event_count\": 2, \"data_collections\": [ { \"data_collection\": \"ORCLPDB1.DEBEZIUM.CUSTOMER\", \"event_count\": 1 }, { \"data_collection\": \"ORCLPDB1.DEBEZIUM.ORDER\", \"event_count\": 1 } ] }", "{ \"before\": null, \"after\": { \"pk\": \"2\", \"aa\": \"1\" }, \"source\": { }, \"op\": \"c\", \"ts_ms\": \"1580390884335\", \"transaction\": { \"id\": \"5.6.641\", \"total_order\": \"1\", \"data_collection_order\": \"1\" } }", "CREATE TABLE customers ( id NUMBER(9) GENERATED BY DEFAULT ON NULL AS IDENTITY (START WITH 1001) NOT NULL PRIMARY KEY, first_name VARCHAR2(255) NOT NULL, last_name VARCHAR2(255) NOT NULL, email VARCHAR2(255) NOT NULL UNIQUE );", "{ \"schema\": { \"type\": \"struct\", \"fields\": [ { \"type\": \"int32\", \"optional\": false, \"field\": \"ID\" } ], \"optional\": false, \"name\": \"server1.INVENTORY.CUSTOMERS.Key\" }, \"payload\": { \"ID\": 1004 } }", "{ \"schema\": { \"type\": \"struct\", \"fields\": [ { \"type\": \"struct\", \"fields\": [ { \"type\": \"int32\", \"optional\": false, \"field\": \"ID\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"FIRST_NAME\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"LAST_NAME\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"EMAIL\" } ], \"optional\": true, \"name\": \"server1.DEBEZIUM.CUSTOMERS.Value\", \"field\": \"before\" }, { \"type\": \"struct\", \"fields\": [ { \"type\": \"int32\", \"optional\": false, \"field\": \"ID\" }, { 
\"type\": \"string\", \"optional\": false, \"field\": \"FIRST_NAME\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"LAST_NAME\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"EMAIL\" } ], \"optional\": true, \"name\": \"server1.DEBEZIUM.CUSTOMERS.Value\", \"field\": \"after\" }, { \"type\": \"struct\", \"fields\": [ { \"type\": \"string\", \"optional\": true, \"field\": \"version\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"name\" }, { \"type\": \"int64\", \"optional\": true, \"field\": \"ts_ms\" }, { \"type\": \"string\", \"optional\": true, \"field\": \"txId\" }, { \"type\": \"string\", \"optional\": true, \"field\": \"scn\" }, { \"type\": \"string\", \"optional\": true, \"field\": \"commit_scn\" }, { \"type\": \"string\", \"optional\": true, \"field\": \"rs_id\" }, { \"type\": \"int64\", \"optional\": true, \"field\": \"ssn\" }, { \"type\": \"int32\", \"optional\": true, \"field\": \"redo_thread\" }, { \"type\": \"string\", \"optional\": true, \"field\": \"user_name\" }, { \"type\": \"boolean\", \"optional\": true, \"field\": \"snapshot\" } ], \"optional\": false, \"name\": \"io.debezium.connector.oracle.Source\", \"field\": \"source\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"op\" }, { \"type\": \"int64\", \"optional\": true, \"field\": \"ts_ms\" } ], \"optional\": false, \"name\": \"server1.DEBEZIUM.CUSTOMERS.Envelope\" }, \"payload\": { \"before\": null, \"after\": { \"ID\": 1004, \"FIRST_NAME\": \"Anne\", \"LAST_NAME\": \"Kretchmar\", \"EMAIL\": \"[email protected]\" }, \"source\": { \"version\": \"2.3.4.Final\", \"name\": \"server1\", \"ts_ms\": 1520085154000, \"txId\": \"6.28.807\", \"scn\": \"2122185\", \"commit_scn\": \"2122185\", \"rs_id\": \"001234.00012345.0124\", \"ssn\": 1, \"redo_thread\": 1, \"user_name\": \"user\", \"snapshot\": false }, \"op\": \"c\", \"ts_ms\": 1532592105975 } }", "{ \"schema\": { ... }, \"payload\": { \"before\": { \"ID\": 1004, \"FIRST_NAME\": \"Anne\", \"LAST_NAME\": \"Kretchmar\", \"EMAIL\": \"[email protected]\" }, \"after\": { \"ID\": 1004, \"FIRST_NAME\": \"Anne\", \"LAST_NAME\": \"Kretchmar\", \"EMAIL\": \"[email protected]\" }, \"source\": { \"version\": \"2.3.4.Final\", \"name\": \"server1\", \"ts_ms\": 1520085811000, \"txId\": \"6.9.809\", \"scn\": \"2125544\", \"commit_scn\": \"2125544\", \"rs_id\": \"001234.00012345.0124\", \"ssn\": 1, \"redo_thread\": 1, \"user_name\": \"user\", \"snapshot\": false }, \"op\": \"u\", \"ts_ms\": 1532592713485 } }", "{ \"schema\": { ... }, \"payload\": { \"before\": { \"ID\": 1004, \"FIRST_NAME\": \"Anne\", \"LAST_NAME\": \"Kretchmar\", \"EMAIL\": \"[email protected]\" }, \"after\": null, \"source\": { \"version\": \"2.3.4.Final\", \"name\": \"server1\", \"ts_ms\": 1520085153000, \"txId\": \"6.28.807\", \"scn\": \"2122184\", \"commit_scn\": \"2122184\", \"rs_id\": \"001234.00012345.0124\", \"ssn\": 1, \"redo_thread\": 1, \"user_name\": \"user\", \"snapshot\": false }, \"op\": \"d\", \"ts_ms\": 1532592105960 } }", "{ \"schema\": { ... 
}, \"payload\": { \"before\": null, \"after\": null, \"source\": { 1 \"version\": \"2.3.4.Final\", \"connector\": \"oracle\", \"name\": \"oracle_server\", \"ts_ms\": 1638974535000, \"snapshot\": \"false\", \"db\": \"ORCLPDB1\", \"sequence\": null, \"schema\": \"DEBEZIUM\", \"table\": \"TEST_TABLE\", \"txId\": \"02000a0037030000\", \"scn\": \"13234397\", \"commit_scn\": \"13271102\", \"lcr_position\": null, \"rs_id\": \"001234.00012345.0124\", \"ssn\": 1, \"redo_thread\": 1, \"user_name\": \"user\" }, \"op\": \"t\", 2 \"ts_ms\": 1638974558961, 3 \"transaction\": null } }", "converters=zero_scale zero_scale.type=io.debezium.connector.oracle.converters.NumberToZeroScaleConverter zero_scale.decimal.mode=precise", "converters=boolean boolean.type=io.debezium.connector.oracle.converters.NumberOneToBooleanConverter boolean.selector=.*MYTABLE.FLAG,.*.IS_ARCHIVED", "ORACLE_SID=ORACLCDB dbz_oracle sqlplus /nolog CONNECT sys/top_secret AS SYSDBA alter system set db_recovery_file_dest_size = 10G; alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile; shutdown immediate startup mount alter database archivelog; alter database open; -- Should now \"Database log mode: Archive Mode\" archive log list exit;", "SQL> SELECT LOG_MODE FROM VUSDDATABASE; LOG_MODE ------------ ARCHIVELOG", "exec rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours',24); exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD');", "ALTER TABLE inventory.customers ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;", "ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;", "sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/logminer_tbs.dbf' SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED; exit; sqlplus sys/top_secret@//localhost:1521/ORCLPDB1 as sysdba CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/logminer_tbs.dbf' SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED; exit; sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba CREATE USER c##dbzuser IDENTIFIED BY dbz DEFAULT TABLESPACE logminer_tbs QUOTA UNLIMITED ON logminer_tbs CONTAINER=ALL; GRANT CREATE SESSION TO c##dbzuser CONTAINER=ALL; 1 GRANT SET CONTAINER TO c##dbzuser CONTAINER=ALL; 2 GRANT SELECT ON V_USDDATABASE to c##dbzuser CONTAINER=ALL; 3 GRANT FLASHBACK ANY TABLE TO c##dbzuser CONTAINER=ALL; 4 GRANT SELECT ANY TABLE TO c##dbzuser CONTAINER=ALL; 5 GRANT SELECT_CATALOG_ROLE TO c##dbzuser CONTAINER=ALL; 6 GRANT EXECUTE_CATALOG_ROLE TO c##dbzuser CONTAINER=ALL; 7 GRANT SELECT ANY TRANSACTION TO c##dbzuser CONTAINER=ALL; 8 GRANT LOGMINING TO c##dbzuser CONTAINER=ALL; 9 GRANT CREATE TABLE TO c##dbzuser CONTAINER=ALL; 10 GRANT LOCK ANY TABLE TO c##dbzuser CONTAINER=ALL; 11 GRANT CREATE SEQUENCE TO c##dbzuser CONTAINER=ALL; 12 GRANT EXECUTE ON DBMS_LOGMNR TO c##dbzuser CONTAINER=ALL; 13 GRANT EXECUTE ON DBMS_LOGMNR_D TO c##dbzuser CONTAINER=ALL; 14 GRANT SELECT ON V_USDLOG TO c##dbzuser CONTAINER=ALL; 15 GRANT SELECT ON V_USDLOG_HISTORY TO c##dbzuser CONTAINER=ALL; 16 GRANT SELECT ON V_USDLOGMNR_LOGS TO c##dbzuser CONTAINER=ALL; 17 GRANT SELECT ON V_USDLOGMNR_CONTENTS TO c##dbzuser CONTAINER=ALL; 18 GRANT SELECT ON V_USDLOGMNR_PARAMETERS TO c##dbzuser CONTAINER=ALL; 19 GRANT SELECT ON V_USDLOGFILE TO c##dbzuser CONTAINER=ALL; 20 GRANT SELECT ON V_USDARCHIVED_LOG TO c##dbzuser CONTAINER=ALL; 21 GRANT SELECT ON V_USDARCHIVE_DEST_STATUS TO c##dbzuser CONTAINER=ALL; 22 GRANT SELECT ON V_USDTRANSACTION TO c##dbzuser CONTAINER=ALL; 23 GRANT 
SELECT ON V_USDMYSTAT TO c##dbzuser CONTAINER=ALL; 24 GRANT SELECT ON V_USDSTATNAME TO c##dbzuser CONTAINER=ALL; 25 exit;", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: debezium-kafka-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" 1 spec: version: 3.5.0 build: 2 output: 3 type: imagestream 4 image: debezium-streams-connect:latest plugins: 5 - name: debezium-connector-oracle artifacts: - type: zip 6 url: https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-oracle/2.3.4.Final-redhat-00001/debezium-connector-oracle-2.3.4.Final-redhat-00001-plugin.zip 7 - type: zip url: https://maven.repository.redhat.com/ga/io/apicurio/apicurio-registry-distro-connect-converter/2.4.4.Final-redhat- <build-number> /apicurio-registry-distro-connect-converter-2.4.4.Final-redhat- <build-number> .zip 8 - type: zip url: https://maven.repository.redhat.com/ga/io/debezium/debezium-scripting/2.3.4.Final-redhat-00001/debezium-scripting-2.3.4.Final-redhat-00001.zip 9 - type: jar url: https://repo1.maven.org/maven2/org/codehaus/groovy/groovy/3.0.11/groovy-3.0.11.jar 10 - type: jar url: https://repo1.maven.org/maven2/org/codehaus/groovy/groovy-jsr223/3.0.11/groovy-jsr223-3.0.11.jar - type: jar url: https://repo1.maven.org/maven2/org/codehaus/groovy/groovy-json3.0.11/groovy-json-3.0.11.jar - type: jar 11 url: https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc8/21.6.0.0/ojdbc8-21.6.0.0.jar bootstrapServers: debezium-kafka-cluster-kafka-bootstrap:9093", "create -f dbz-connect.yaml", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: labels: strimzi.io/cluster: debezium-kafka-connect-cluster name: inventory-connector-oracle 1 spec: class: io.debezium.connector.oracle.OracleConnector 2 tasksMax: 1 3 config: 4 schema.history.internal.kafka.bootstrap.servers: debezium-kafka-cluster-kafka-bootstrap.debezium.svc.cluster.local:9092 schema.history.internal.kafka.topic: schema-changes.inventory database.hostname: oracle.debezium-oracle.svc.cluster.local 5 database.port: 1521 6 database.user: debezium 7 database.password: dbz 8 database.dbname: mydatabase 9 topic.prefix: inventory-connector-oracle 10 table.include.list: PUBLIC.INVENTORY 11", "create -n <namespace> -f <kafkaConnector> .yaml", "create -n debezium -f {context}-inventory-connector.yaml", "cat <<EOF >debezium-container-for-oracle.yaml 1 FROM registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0 USER root:root RUN mkdir -p /opt/kafka/plugins/debezium 2 RUN cd /opt/kafka/plugins/debezium/ && curl -O https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-oracle/2.3.4.Final-redhat-00001/debezium-connector-oracle-2.3.4.Final-redhat-00001-plugin.zip && unzip debezium-connector-oracle-2.3.4.Final-redhat-00001-plugin.zip && rm debezium-connector-oracle-2.3.4.Final-redhat-00001-plugin.zip RUN cd /opt/kafka/plugins/debezium/ && curl -O https://repo1.maven.org/maven2/com/oracle/ojdbc/ojdbc8/21.1.0.0/ojdbc8-21.1.0.0.jar USER 1001 EOF", "build -t debezium-container-for-oracle:latest .", "docker build -t debezium-container-for-oracle:latest .", "push <myregistry.io> /debezium-container-for-oracle:latest", "docker push <myregistry.io> /debezium-container-for-oracle:latest", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" 1 spec: image: debezium-container-for-oracle 2", "create -f dbz-connect.yaml", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: 
inventory-connector-oracle 1 labels: strimzi.io/cluster: my-connect-cluster annotations: strimzi.io/use-connector-resources: 'true' spec: class: io.debezium.connector.oracle.OracleConnector 2 config: database.hostname: <oracle_ip_address> 3 database.port: 1521 4 database.user: c##dbzuser 5 database.password: dbz 6 database.dbname: ORCLCDB 7 database.pdb.name : ORCLPDB1, 8 topic.prefix: inventory-connector-oracle 9 schema.history.internal.kafka.bootstrap.servers: kafka:9092 10 schema.history.internal.kafka.topic: schema-changes.inventory 11", "apply -f inventory-connector.yaml", "describe KafkaConnector <connector-name> -n <project>", "describe KafkaConnector inventory-connector-oracle -n debezium", "Name: inventory-connector-oracle Namespace: debezium Labels: strimzi.io/cluster=debezium-kafka-connect-cluster Annotations: <none> API Version: kafka.strimzi.io/v1beta2 Kind: KafkaConnector Status: Conditions: Last Transition Time: 2021-12-08T17:41:34.897153Z Status: True Type: Ready Connector Status: Connector: State: RUNNING worker_id: 10.131.1.124:8083 Name: inventory-connector-oracle Tasks: Id: 0 State: RUNNING worker_id: 10.131.1.124:8083 Type: source Observed Generation: 1 Tasks Max: 1 Topics: inventory-connector-oracle.inventory inventory-connector-oracle.inventory.addresses inventory-connector-oracle.inventory.customers inventory-connector-oracle.inventory.geom inventory-connector-oracle.inventory.orders inventory-connector-oracle.inventory.products inventory-connector-oracle.inventory.products_on_hand Events: <none>", "get kafkatopics", "NAME CLUSTER PARTITIONS REPLICATION FACTOR READY connect-cluster-configs debezium-kafka-cluster 1 1 True connect-cluster-offsets debezium-kafka-cluster 25 1 True connect-cluster-status debezium-kafka-cluster 5 1 True consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a debezium-kafka-cluster 50 1 True inventory-connector-oracle--a96f69b23d6118ff415f772679da623fbbb99421 debezium-kafka-cluster 1 1 True inventory-connector-oracle.inventory.addresses---1b6beaf7b2eb57d177d92be90ca2b210c9a56480 debezium-kafka-cluster 1 1 True inventory-connector-oracle.inventory.customers---9931e04ec92ecc0924f4406af3fdace7545c483b debezium-kafka-cluster 1 1 True inventory-connector-oracle.inventory.geom---9f7e136091f071bf49ca59bf99e86c713ee58dd5 debezium-kafka-cluster 1 1 True inventory-connector-oracle.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d debezium-kafka-cluster 1 1 True inventory-connector-oracle.inventory.products---df0746db116844cee2297fab611c21b56f82dcef debezium-kafka-cluster 1 1 True inventory-connector-oracle.inventory.products_on_hand---8649e0f17ffcc9212e266e31a7aeea4585e5c6b5 debezium-kafka-cluster 1 1 True schema-changes.inventory debezium-kafka-cluster 1 1 True strimzi-store-topic---effb8e3e057afce1ecf67c3f5d8e4e3ff177fc55 debezium-kafka-cluster 1 1 True strimzi-topic-operator-kstreams-topic-store-changelog---b75e702040b99be8a9263134de3507fc0cc4017b debezium-kafka-cluster 1 1 True", "exec -n <project> -it <kafka-cluster> -- /opt/kafka/bin/kafka-console-consumer.sh > --bootstrap-server localhost:9092 > --from-beginning > --property print.key=true > --topic= <topic-name >", "exec -n debezium -it debezium-kafka-cluster-kafka-0 -- /opt/kafka/bin/kafka-console-consumer.sh > --bootstrap-server localhost:9092 > --from-beginning > --property print.key=true > --topic=inventory-connector-oracle.inventory.products_on_hand", 
"{\"schema\":{\"type\":\"struct\",\"fields\":[{\"type\":\"int32\",\"optional\":false,\"field\":\"product_id\"}],\"optional\":false,\"name\":\"inventory-connector-oracle.inventory.products_on_hand.Key\"},\"payload\":{\"product_id\":101}} {\"schema\":{\"type\":\"struct\",\"fields\":[{\"type\":\"struct\",\"fields\":[{\"type\":\"int32\",\"optional\":false,\"field\":\"product_id\"},{\"type\":\"int32\",\"optional\":false,\"field\":\"quantity\"}],\"optional\":true,\"name\":\"inventory-connector-oracle.inventory.products_on_hand.Value\",\"field\":\"before\"},{\"type\":\"struct\",\"fields\":[{\"type\":\"int32\",\"optional\":false,\"field\":\"product_id\"},{\"type\":\"int32\",\"optional\":false,\"field\":\"quantity\"}],\"optional\":true,\"name\":\"inventory-connector-oracle.inventory.products_on_hand.Value\",\"field\":\"after\"},{\"type\":\"struct\",\"fields\":[{\"type\":\"string\",\"optional\":false,\"field\":\"version\"},{\"type\":\"string\",\"optional\":false,\"field\":\"connector\"},{\"type\":\"string\",\"optional\":false,\"field\":\"name\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"ts_ms\"},{\"type\":\"string\",\"optional\":true,\"name\":\"io.debezium.data.Enum\",\"version\":1,\"parameters\":{\"allowed\":\"true,last,false\"},\"default\":\"false\",\"field\":\"snapshot\"},{\"type\":\"string\",\"optional\":false,\"field\":\"db\"},{\"type\":\"string\",\"optional\":true,\"field\":\"sequence\"},{\"type\":\"string\",\"optional\":true,\"field\":\"table\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"server_id\"},{\"type\":\"string\",\"optional\":true,\"field\":\"gtid\"},{\"type\":\"string\",\"optional\":false,\"field\":\"file\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"pos\"},{\"type\":\"int32\",\"optional\":false,\"field\":\"row\"},{\"type\":\"int64\",\"optional\":true,\"field\":\"thread\"},{\"type\":\"string\",\"optional\":true,\"field\":\"query\"}],\"optional\":false,\"name\":\"io.debezium.connector.oracle.Source\",\"field\":\"source\"},{\"type\":\"string\",\"optional\":false,\"field\":\"op\"},{\"type\":\"int64\",\"optional\":true,\"field\":\"ts_ms\"},{\"type\":\"struct\",\"fields\":[{\"type\":\"string\",\"optional\":false,\"field\":\"id\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"total_order\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"data_collection_order\"}],\"optional\":true,\"field\":\"transaction\"}],\"optional\":false,\"name\": \"inventory-connector-oracle.inventory.products_on_hand.Envelope\" }, \"payload\" :{ \"before\" : null , \"after\" :{ \"product_id\":101,\"quantity\":3 },\"source\":{\"version\":\"2.3.4.Final-redhat-00001\",\"connector\":\"oracle\",\"name\":\"inventory-connector-oracle\",\"ts_ms\":1638985247805,\"snapshot\":\"true\",\"db\":\"inventory\",\"sequence\":null,\"table\":\"products_on_hand\",\"server_id\":0,\"gtid\":null,\"file\":\"oracle-bin.000003\",\"pos\":156,\"row\":0,\"thread\":null,\"query\":null}, \"op\" : \"r\" ,\"ts_ms\":1638985247805,\"transaction\":null}}", "boolean.type: io.debezium.connector.oracle.converters.NumberOneToBooleanConverter", "boolean.selector: .*MYTABLE.FLAG,.*.IS_ARCHIVED", "\"snapshot.select.statement.overrides\": \"customer.orders\", \"snapshot.select.statement.overrides.customer.orders\": \"SELECT * FROM [customers].[orders] WHERE delete_flag = 0 ORDER BY id DESC\"", "column.mask.hash.SHA-256.with.salt.CzQMA0cB5K = inventory.orders.customerName, inventory.shipment.customerName", "database.port=1521 rac.nodes=192.168.1.100,192.168.1.101", "database.port=1521 
rac.nodes=192.168.1.100,192.168.1.101:1522", "schema.history.internal.producer.security.protocol=SSL schema.history.internal.producer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks schema.history.internal.producer.ssl.keystore.password=test1234 schema.history.internal.producer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks schema.history.internal.producer.ssl.truststore.password=test1234 schema.history.internal.producer.ssl.key.password=test1234 schema.history.internal.consumer.security.protocol=SSL schema.history.internal.consumer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks schema.history.internal.consumer.ssl.keystore.password=test1234 schema.history.internal.consumer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks schema.history.internal.consumer.ssl.truststore.password=test1234 schema.history.internal.consumer.ssl.key.password=test1234", "kafkacat -b localhost -C -t my_connect_offsets -f 'Partition(%p) %k %s\\n' Partition(11) [\"inventory-connector\",{\"server\":\"server1\"}] {\"scn\":\"324567897\", \"commit_scn\":\"324567897: 0x2832343233323:1\"}", "echo '[\"inventory-connector\",{\"server\":\"server1\"}]|{\"scn\":\"3245675000\",\"commit_scn\":\"324567500\"}' | kafkacat -P -b localhost -t my_connect_offsets -K \\| -p 11", "sysctl -w net.ipv4.tcp_keepalive_time=60", "net.ipv4.tcp_keepalive_time=60", "database.url=jdbc:oracle:thin:username/password!@(DESCRIPTION=(ENABLE=broken)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(Host=hostname)(Port=port)))(CONNECT_DATA=(SERVICE_NAME=serviceName)))", "ORA-01555: snapshot too old: rollback segment number 12345 with name \"_SYSSMU11_1234567890USD\" too small", "ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT", "ORA-01882: timezone region not found", "ORA-25191: cannot reference overflow table of an index-organized table" ]
https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/debezium_user_guide/debezium-connector-for-oracle
Chapter 16. Uninstalling Certificate System subsystems
Chapter 16. Uninstalling Certificate System subsystems It is possible to remove individual subsystems or to uninstall all packages associated with an entire subsystem. Subsystems are installed and uninstalled individually. For example, it is possible to uninstall a KRA subsystem while leaving an installed and configured CA subsystem. It is also possible to remove a single CA subsystem while leaving other CA subsystems on the machine. 16.1. Removing a subsystem Removing a subsystem requires specifying the subsystem type and the name of the server in which the subsystem is running. This command removes all files associated with the subsystem (without removing the subsystem packages). The -s option specifies the subsystem to be removed (such as CA, KRA, OCSP, TKS, or TPS). The -i option specifies the instance name, such as pki-tomcat . For example, to remove a CA subsystem: The pkidestroy utility removes the subsystem and any related files, such as the certificate databases, certificates, keys, and associated users. It does not uninstall the subsystem packages. If the subsystem is the last subsystem on the server instance, the server instance is removed as well. 16.2. Removing Certificate System subsystem packages A number of subsystem-related packages and dependencies are installed with Red Hat Certificate System; these are listed in Section 6.9, "Installing RHCS and RHDS packages" . Removing a subsystem removes only the files and directories associated with that specific subsystem. It does not remove the actual installed packages that are used by that instance. Completely uninstalling Red Hat Certificate System or one of its subsystems requires using package management tools, like yum , to remove each package individually. To uninstall an individual Certificate System subsystem package: Remove all the associated subsystems. For example: Run the uninstall utility. For example: The subsystem type can be ca , kra , ocsp , tks , or tps . To remove other packages and dependencies, remove the packages specifically, using yum . The complete list of installed packages is at Section 6.9, "Installing RHCS and RHDS packages" .
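As a minimal sketch of removing a different subsystem and its package, the commands below remove a KRA instance and then the corresponding package. The instance name pki-tomcat follows the examples in this chapter; the package name pki-kra is an assumption based on the usual subsystem package naming, so verify the package names actually installed on your system before removing anything.
# Minimal sketch: remove a KRA subsystem and then its package.
# "pki-tomcat" follows the examples above; "pki-kra" is an assumed package name.
yum list installed "pki-*"
pkidestroy -s KRA -i pki-tomcat
yum remove pki-kra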
[ "pkidestroy -s subsystem_type -i instance_name", "pkidestroy -s CA -i pki-tomcat Loading deployment configuration from /var/lib/pki/pki-tomcat/ca/registry/ca/deployment.cfg. Uninstalling CA from /var/lib/pki/pki-tomcat. Removed symlink /etc/systemd/system/multi-user.target.wants/pki-tomcatd.target. Uninstallation complete.", "pkidestroy -s CA -i pki-tomcat", "yum remove pki-subsystem_type" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide_common_criteria_edition/uninstalling_certificate_system_subsystems
17.13. Dynamically Changing a Host Physical Machine or a Network Bridge that is Attached to a Virtual NIC
17.13. Dynamically Changing a Host Physical Machine or a Network Bridge that is Attached to a Virtual NIC This section demonstrates how to move the vNIC of a guest virtual machine from one bridge to another while the guest virtual machine is running, without compromising the guest virtual machine. Prepare a guest virtual machine with a configuration similar to the following: Prepare an XML file for the interface update: Start the guest virtual machine, confirm the guest virtual machine's network functionality, and check that the guest virtual machine's vnetX is connected to the bridge you indicated. Update the guest virtual machine's network with the new interface parameters with the following command: On the guest virtual machine, run service network restart . The guest virtual machine gets a new IP address for virbr1. Check that the guest virtual machine's vnet0 is connected to the new bridge (virbr1).
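As a small supplementary check, you can list the guest's interfaces and their source bridges from the host at any point in this procedure. The sketch below assumes the guest is named test1, matching the example commands for this section; substitute your own domain name.
# List the vNICs of the guest and the bridge each one is attached to (run on the host).
# "test1" is the example guest name used in this section.
virsh domiflist test1
# The Source column should show virbr0 before the update and virbr1 after
# "virsh update-device test1 br1.xml" has been applied.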
[ "<interface type='bridge'> <mac address='52:54:00:4a:c9:5e'/> <source bridge='virbr0'/> <model type='virtio'/> </interface>", "cat br1.xml", "<interface type='bridge'> <mac address='52:54:00:4a:c9:5e'/> <source bridge='virbr1'/> <model type='virtio'/> </interface>", "brctl show bridge name bridge id STP enabled interfaces virbr0 8000.5254007da9f2 yes virbr0-nic vnet0 virbr1 8000.525400682996 yes virbr1-nic", "virsh update-device test1 br1.xml Device updated successfully", "brctl show bridge name bridge id STP enabled interfaces virbr0 8000.5254007da9f2 yes virbr0-nic virbr1 8000.525400682996 yes virbr1-nic vnet0" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-virtual_networking-dynamically_changing_a_host_physical_machine_or_a_network_bridge_that_is_attached_to_a_virtual_nic
Chapter 4. View OpenShift Data Foundation Topology
Chapter 4. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements together compose the Storage cluster. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close it and return to the previous view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pod information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_on_any_platform/viewing-odf-topology_mcg-verify
Chapter 4. Known issues
Chapter 4. Known issues Resolved known issues for this release of Red Hat Trusted Profile Analyzer (RHTPA): Vulnerability count mismatch SBOM data does not load properly when uploading a large SBOM A list of known issues found in this release: Value inconsistencies between the SBOM bar chart and the pie chart The Software Bill of Materials (SBOM) documents listed on the bar chart have different values than the pie chart on the RHTPA home page. There is currently no workaround for this issue, and it will be fixed in a later release. The spog-ui-pod-service pod restarts when launching the Trusted Profile Analyzer console in a web browser When running Red Hat Trusted Profile Analyzer (RHTPA) on Red Hat Enterprise Linux (RHEL), the spog-ui-pod-service pod restarts when first launching the Trusted Profile Analyzer console in a web browser, causing the application to be unresponsive. To work around this issue, you can try refreshing the web page or closing the browser tab and reopening the RHTPA console in a new tab. Doing this loads the RHTPA console successfully. The collector-osv gives a GraphQL error When the collector-osv sends data to the Graph for Understanding Artifact Composition (GUAC) API without complying with the GraphQL GUAC schema, the default values are not applied for some optional fields, for example, a namespace for a package. GUAC returns the following error message: pq: insert or update on table package_versions violates foreign key constraint package_versions_package_names_versions . This causes the ingestion of OpenSource Vulnerability (OSV) data to fail, and as a consequence some packages could have fewer vulnerabilities reported than expected. Currently, there is no workaround for this issue. Inconsistencies between the total number of CVEs displayed on the dashboard and the CVE tab The total number of Common Vulnerabilities and Exposures (CVE) uses different filters between the RHTPA home page dashboard and the CVE tab on the search results page, causing the discrepancy between the two values. Currently, there is no workaround for this known issue. Data migration fails when upgrading from Trusted Profile Analyzer 1.1.2 to 1.2 The bombastic and vexation collector pods crash when there is no space left on the persistent volume claim (PVC) for the PostgreSQL instance. To work around this potential issue, increase the size of the PVC by 10 GB (see the example oc patch command below). An API error on the package details page In the RHTPA console, when navigating from the Vulnerabilities page to the package details page, clicking the affected dependencies link gives you the following error message: API error: Error contacting GUAC (Guac) - Client error: Cannot find an SBOM for PackageUrl Currently, there is no workaround for this known issue. Package version mismatch between the API response and the HTML report for Red Hat Dependency Analytics Opening a manifest file for analysis in Visual Studio Code or IntelliJ can give you a different package version number between the Red Hat Dependency Analytics (RHDA) HTML report and an API client response. Before analyzing the manifest file, the API client compares package versions in the manifest file to the installed package versions within the client's environment. When there is a difference in package version, you receive an error message containing the first package version mismatch. To work around this issue, you can disable the Match Manifest Versions option of the RHDA extension in your integrated development environment (IDE).
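The PVC workaround above can be applied with a patch along the lines of the following sketch. The claim name and namespace are hypothetical placeholders (look them up with oc get pvc), the target size is only an example, and the storage class must support volume expansion.
# Illustrative only: find the PostgreSQL PVC and grow it before retrying the upgrade.
# <rhtpa-namespace> and <postgresql-pvc> are hypothetical placeholders.
oc get pvc -n <rhtpa-namespace>
# Set the request to the current size plus roughly 10 GB, for example 60Gi:
oc patch pvc <postgresql-pvc> -n <rhtpa-namespace> --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"60Gi"}}}}'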
null
https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1/html/release_notes/known-issues
Chapter 8. Managing projects
Chapter 8. Managing projects A Project is a logical collection of Ansible playbooks, represented in automation controller. You can manage playbooks and playbook directories in different ways: By placing them manually under the Project Base Path on your automation controller server (see the sketch at the end of this chapter). By placing your playbooks into a source code management (SCM) system supported by the automation controller. These include Git, Subversion, and Mercurial. Note This Getting Started Guide uses lightweight examples to get you up and running. But for production purposes, you must use source control to manage your playbooks. The best practice is to treat your infrastructure as code, which is in line with DevOps ideals. 8.1. Setting up a project Automation controller simplifies the startup process by providing you with a Demo Project that you can work with initially. Procedure To review existing projects, select Resources Projects from the navigation panel. Click Demo Project to view its details. 8.2. Editing a project As part of the initial setup, you can leave the default Demo Project as it is. You can edit it later. Procedure Open the project to edit it by using one of these methods: Go to the project Details page and click Edit . From the navigation panel, select Resources Projects . Click Edit next to the project name and edit the appropriate details. Save your changes. 8.3. Syncing a project If you want to fetch the latest changes in a project, you can manually start an SCM sync for this project. Procedure Open the project to update the SCM-based demo project by using one of these methods: Go to the project Details page and click Sync . From the navigation panel, select Resources Projects and click Sync Project . Note When you add a project set up to use source control, a "sync" starts. This fetches the project details from the configured source control.
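As a sketch of the manual option mentioned at the start of this chapter, a playbook directory can be copied under the Project Base Path on the controller node. The path /var/lib/awx/projects is assumed here as the usual default base path, and the directory and playbook names are made up; check the base path configured for your installation and make sure the controller's service account can read the files.
# Assumed default Project Base Path; verify it in your controller settings.
mkdir -p /var/lib/awx/projects/my-manual-project
cat > /var/lib/awx/projects/my-manual-project/hello.yml <<'EOF'
---
- name: Minimal example playbook
  hosts: all
  gather_facts: false
  tasks:
    - name: Print a greeting
      ansible.builtin.debug:
        msg: "Hello from a manually placed project"
EOF
# The new directory can then be selected as the playbook directory when you
# create a project that uses the Manual source control type.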
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/getting_started_with_automation_controller/controller-projects
Chapter 6. Workspaces
Chapter 6. Workspaces Use workspaces to select specific systems and group them together. You can view and manage the individual workspaces and the system membership of each group. In addition, you can filter your system lists across applications by workspaces. You can also manage user access to specific workspaces to enhance security. Workspaces have the following characteristics: Workspaces are only for systems. You cannot add workspaces as children of another workspace. Each system can belong to only one workspace. Using workspaces is not mandatory; systems that are not assigned to specific workspaces can remain unassigned. Additional resources For more information about user access, refer to User Access Guide for Role-based Access Control (RBAC) . For more information about user access to workspaces, refer to User access for RBAC in system inventory . 6.1. Creating Workspaces Prerequisites You must be an Organization administrator (member of the Default administrator access group) or have the Workspace administrator role. Procedure On the Red Hat Hybrid Cloud Console, navigate to Inventory . Click the Inventory drop-down menu and select Workspaces . Click Create workspace . The Create workspace dialog box displays. Type a name for the workspace in the Workspace name field. Names can consist of lowercase letters, numbers, spaces, hyphens (-), and underscores (_). Click Create . A Workspace created message displays, and the new workspace appears in the list of workspaces. 6.2. Adding systems to a newly created workspace Note Each system can belong to only one workspace. In the current release of Workspaces, a system cannot be reassigned to a different workspace in a single step. You must first remove the system from its current workspace, and then assign it to a new workspace. Prerequisites Organization Administrator access to Insights for Red Hat Enterprise Linux, or Workspaces administrator permissions to the group, or both inventory:groups:write and inventory:groups:read permissions to the group Procedure On the Red Hat Hybrid Cloud Console, navigate to Inventory . Select Workspaces . Click the name of the group to which you want to add systems. A page for Workspaces displays with the name of the workspace and two tabs, Systems and Group Details . On the Systems tab, click Add systems . The Add systems dialog box displays and shows the systems available for you to view in inventory. Select the systems you want to add to the workspace. Note If you select a system that already belongs to another workspace, a warning message displays: One or more of the selected systems already belong to a workspace. Make sure that all the systems you have selected are ungrouped, or you will not be able to proceed. When you have finished selecting systems, click Add systems . The Workspaces page displays and includes the systems you added to the workspace. 6.2.1. Adding a system and creating a workspace from the Inventory systems page Prerequisites Organization administrator access to Insights for Red Hat Enterprise Linux, or Workspace administrator permissions to the group, or both inventory:groups:write and inventory:groups:read permissions to the group Procedure On the Red Hat Hybrid Cloud Console, navigate to Inventory . The list of systems in your inventory appears. Locate the system that you want to add. Click the More options icon (...) on the far right side of the system listing. Select Add to workspace from the pop-up menu. The Add to workspace dialog box displays. Click Create a new workspace . 
The Create workspace dialog box displays. Type a name for the new group in the Name field and click Create . The Inventory page appears and displays a status (success or failure) message. 6.3. Removing systems from the workspace You can remove systems from the workspace from two pages in the Red Hat Hybrid Cloud Console: the Workspaces page and the Systems page. 6.3.1. Removing systems from the workspace using the Workspaces page Prerequisites You must be an Organization administrator (member of the Default admin access group), or have the Workspace administrator role, or have the inventory:group:write permissions for that particular workspace. Procedure On the Red Hat Hybrid Cloud Console, navigate to Inventory . Click the Inventory drop-down menu and select Workspaces . The Workspaces page displays. Select the workspace that contains the systems that you want to remove. Locate the system that you want to remove from the workspace. Click the More options icon (...) on the far right side of the system listing. Select Remove from workspace from the pop-up menu. The Remove from workspace? dialog box displays. Optional: To remove multiple systems from the workspace at once, select each system you want to remove, and then select Remove from workspace from the More options menu (the options icon (...)) in the toolbar. Click Remove . The Workspace page displays and shows the updated workspace with a status (success or failure) message. 6.3.2. Removing systems from the workspace using the Systems page Prerequisites Organization administrator access to Insights for Red Hat Enterprise Linux, or Workspace administrator permissions to the workspace, or both inventory:groups:write and inventory:groups:read permissions to the workspace Procedure On the Red Hat Hybrid Cloud Console, navigate to Inventory . Click the Inventory drop-down menu and select Systems . The Systems page displays. Locate the system that you want to remove from the workspace. Click the More options icon (...) on the far right side of the system listing. Select Remove from workspace from the pop-up menu. The Remove from workspace? dialog box displays. Note If any of the systems you selected do not belong to any workspace, the Remove from workspace option remains disabled. Make sure that you select only systems that belong to the workspace. Optional: To remove multiple systems from the workspace, select each system you want to remove, and then select Remove from workspace from the More options (the options icon (...)) menu. Click Remove . The Systems page displays and shows a status (success or failure) message. 6.4. Renaming the workspace Prerequisites You must be an Organization administrator (member of the Default Administrator access group), or have the Workspace administrator role, or have the inventory:group:write permissions for that particular workspace. Procedure On the Red Hat Hybrid Cloud Console, navigate to Inventory . Click the Inventory drop-down menu and select Workspaces . The Workspaces page displays. Click the Workspace actions drop-down menu in the upper right corner of the Workspaces page. Select Rename from the drop-down menu. The Rename workspace dialog box displays. Type the new name into the Name field, and click Save . The Workspaces page shows the renamed workspace in the list of workspaces. 6.5. Deleting the workspace Note Before you delete a workspace, make sure that the workspace does not contain any systems. You can only delete empty workspaces. 
If you attempt to delete a workspace that still contains systems, Insights returns a warning message. Prerequisites You must be an Organization administrator (member of the Default admin access group), or have the Workspace administrator role, or have the inventory:group:write permissions for that particular workspace. Procedure On the Red Hat Hybrid Cloud Console, navigate to Inventory . Click the Inventory drop-down menu and select Workspaces . The Workspaces page displays. Click the options icon (...) on the far right side of the listing for the group you want to delete. Select Delete from the pop-up menu. The Delete workspace dialog box displays. Select the checkbox to acknowledge that the delete operation cannot be undone. Click Delete . The Workspaces page shows an updated list of Workspaces and a status (success or failure) message. Note You can also delete the workspace from within the page for the workspace itself. Navigate to the Workspace and click the Actions drop-down menu, and then select Delete .
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/viewing_and_managing_system_inventory_with_fedramp/deploying-insights-with-rhca_user-access
Chapter 4. Types of certification workflow
Chapter 4. Types of certification workflow 4.1. Single Instance Type certification A Single Instance Type Certification consists of a bootable, installable, and operable collection of physical or virtual hardware features, as defined by a specification provided by a Partner. The specification may define features as standard or optional. The Instance Type is considered to provide all of the features from the complete collection of standard and optional components unless explicitly excluded by or from the specification. 4.2. SuperSet Instance Type certification The SuperSet Instance Type Certification covers a variety of different configurations of the same Instance Type. The SuperSet Instance Type is known by a unique name within a naming convention. Example compute_slow, compute_medium, compute_fast The certification is conducted as per the basic process where all of the configurations are reviewed. The test plan will consider multiple configurations in order to increase the testing and processing efficiency without creating risks. The certification publication in the catalog will be a combined entry where the sizes covered by the certification will be displayed on the Instance Type certification entry. 4.3. Supplemental certification A supplemental certification allows a certified Instance Type to extend or alter the configuration of the Instance Type. 4.4. Pass-Through Instance Type certification A Pass-Through Instance Type Certification refers to the ability of a third party system or component to be granted the same certification as an Instance Type previously certified by the Original Provider. The Original Provider can extend a certification granted to their instance type to another Partner's Instance Type where the original provider: Has permission from the third party Has the mechanics to ensure the third party does not alter the hardware in such a way that it would no longer be considered a subset of the original model certified by Red Hat Extends their responsibilities of support and representative Instance Type to include situations involving the third party Instance Type The third party, however, cannot extend their pass-through certification to another Partner. Important Both Partners are required to be members of the CCSP program; only the original provider may request Pass-Through certifications. Note You may also utilize the pass-through process where the same Instance Type is available with multiple names.
null
https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_cloud_instance_type_policy_guide/assembly_certification-workflow-types_cloud-instance-pol-certification-lifecycle
Chapter 10. Enabling members of a group to back up Directory Server and performing the backup as one of the group members
Chapter 10. Enabling members of a group to back up Directory Server and performing the backup as one of the group members You can configure that members of a group have permissions to back up an instance and perform the backup. This increases the security because you no longer need to set the credentials of cn=Directory Manager in your backup script or cron jobs. Additionally, you can easily grant and revoke the backup permissions by modifying the group. 10.1. Enabling a group to back up Directory Server Use this procedure to add the cn=backup_users,ou=groups,dc=example,dc=com group and enable members of this group to create backup tasks. Prerequisites The entry ou=groups,dc=example,dc=com exists in the database. Procedure Create the cn=backup_users,ou=groups,dc=example,dc=com group: # dsidm -D "cn=Directory manager" ldap://server.example.com -b " dc=example,dc=com " group create --cn backup_users Add an access control instruction (ACI) that allows members of the cn=backup_users,ou=groups,dc=example,dc=com group to create backup tasks: # ldapadd -D "cn=Directory Manager" -W -H ldap://server.example.com dn: cn=config changetype: modify add: aci aci: (target = " ldap:///cn=backup,cn=tasks,cn=config ")(targetattr="*") (version 3.0 ; acl " permission: Allow backup_users group to create backup tasks " ; allow (add, read, search) groupdn = " ldap:///cn=backup_users,ou=groups,dc=example,dc=com ";) - add: aci aci: (target = "ldap:///cn=config")(targetattr = "nsslapd-bakdir || objectClass") (version 3.0 ; acl " permission: Allow backup_users group to access bakdir attribute " ; allow (read,search) groupdn = " ldap:///cn=backup_users,ou=groups,dc=example,dc=com ";) Create a user: Create a user account: # dsidm -D "cn=Directory manager" ldap://server.example.com -b " dc=example,dc=com " user create --uid=" example " --cn=" example " --uidNumber=" 1000 " --gidNumber=" 1000 " --homeDirectory=" /home/example/ " --displayName=" Example User " Set a password on the user account: # dsidm -D "cn=Directory manager" ldap://server.example.com -b " dc=example,dc=com " account reset_password " uid=example,ou=People,dc=example,dc=com " " password " Add the uid=example,ou=People,dc=example,dc=com user to the cn=backup_users,ou=groups,dc=example,dc=com group: # dsidm -D "cn=Directory manager" ldap://server.example.com -b " dc=example,dc=com " group add_member backup_users uid=example,ou=People,dc=example,dc=com Verification Display the ACIs set on the cn=config entry: # ldapsearch -o ldif-wrap=no -LLLx -D "cn=directory manager" -W -H ldap://server.example.com -b cn=config aci=* aci -s base dn: cn=config aci: (target = "ldap:///cn=backup,cn=tasks,cn=config")(targetattr="*")(version 3.0 ; acl "permission: Allow backup_users group to create backup tasks" ; allow (add, read, search) groupdn = "ldap:///cn=backup_users,ou=groups,dc=example,dc=com";) aci: (target = "ldap:///cn=config")(targetattr = "nsslapd-bakdir || objectClass")(version 3.0 ; acl "permission: Allow backup_users group to access bakdir attribute" ; allow (read,search) groupdn = "ldap:///cn=backup_users,ou=groups,dc=example,dc=com";) ... 10.2. Performing a backup as a regular user You can perform backups as a regular user instead of cn=Directory Manager . Prerequisites You enabled members of the cn=backup_users,ou=groups,dc=example,dc=com group to perform backups. The user you use to perform the backup is a member of the cn=backup_users,ou=groups,dc=example,dc=com group. 
Procedure Create a backup task using one of the following methods: Using the dsconf backup create command: # dsconf -D " uid=example,ou=People,dc=example,dc=com " ldap://server.example.com backup create By manually creating the task: # ldapadd -D " uid=example,ou=People,dc=example,dc=com " -W -H ldap://server.example.com dn: cn= backup-2021_07_23_12:55_00 ,cn=backup,cn=tasks,cn=config changetype: add objectClass: extensibleObject nsarchivedir: /var/lib/dirsrv/slapd-instance_name/bak/backup-2021_07_23_12:55_00 nsdatabasetype: ldbm database cn: backup-2021_07_23_12:55_00 Verification Verify that the backup was created: # ls -l /var/lib/dirsrv/slapd-instance_name/bak/ total 0 drwx------. 3 dirsrv dirsrv 108 Jul 23 12:55 backup-2021_07_23_12_55_00 ... Additional resources Enabling a group to back up Directory Server
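Because the bind is now performed as a regular group member rather than cn=Directory Manager, the backup command can be scheduled from cron without exposing the Directory Manager password. The crontab entry below is only a sketch: the schedule, the literal -w password, and the use of a user crontab are assumptions, and you should adapt credential handling (for example, a protected password file) to your own policy.
# Example crontab entry (added with crontab -e): nightly backup at 02:00,
# binding as the unprivileged group member. Use the full path to dsconf
# (typically /usr/sbin/dsconf) if cron's PATH does not include it.
0 2 * * * dsconf -D "uid=example,ou=People,dc=example,dc=com" -w 'password' ldap://server.example.com backup create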
[ "dsidm -D \"cn=Directory manager\" ldap://server.example.com -b \" dc=example,dc=com \" group create --cn backup_users", "ldapadd -D \"cn=Directory Manager\" -W -H ldap://server.example.com dn: cn=config changetype: modify add: aci aci: (target = \" ldap:///cn=backup,cn=tasks,cn=config \")(targetattr=\"*\") (version 3.0 ; acl \" permission: Allow backup_users group to create backup tasks \" ; allow (add, read, search) groupdn = \" ldap:///cn=backup_users,ou=groups,dc=example,dc=com \";) - add: aci aci: (target = \"ldap:///cn=config\")(targetattr = \"nsslapd-bakdir || objectClass\") (version 3.0 ; acl \" permission: Allow backup_users group to access bakdir attribute \" ; allow (read,search) groupdn = \" ldap:///cn=backup_users,ou=groups,dc=example,dc=com \";)", "dsidm -D \"cn=Directory manager\" ldap://server.example.com -b \" dc=example,dc=com \" user create --uid=\" example \" --cn=\" example \" --uidNumber=\" 1000 \" --gidNumber=\" 1000 \" --homeDirectory=\" /home/example/ \" --displayName=\" Example User \"", "dsidm -D \"cn=Directory manager\" ldap://server.example.com -b \" dc=example,dc=com \" account reset_password \" uid=example,ou=People,dc=example,dc=com \" \" password \"", "dsidm -D \"cn=Directory manager\" ldap://server.example.com -b \" dc=example,dc=com \" group add_member backup_users uid=example,ou=People,dc=example,dc=com", "ldapsearch -o ldif-wrap=no -LLLx -D \"cn=directory manager\" -W -H ldap://server.example.com -b cn=config aci=* aci -s base dn: cn=config aci: (target = \"ldap:///cn=backup,cn=tasks,cn=config\")(targetattr=\"*\")(version 3.0 ; acl \"permission: Allow backup_users group to create backup tasks\" ; allow (add, read, search) groupdn = \"ldap:///cn=backup_users,ou=groups,dc=example,dc=com\";) aci: (target = \"ldap:///cn=config\")(targetattr = \"nsslapd-bakdir || objectClass\")(version 3.0 ; acl \"permission: Allow backup_users group to access bakdir attribute\" ; allow (read,search) groupdn = \"ldap:///cn=backup_users,ou=groups,dc=example,dc=com\";)", "dsconf -D \" uid=example,ou=People,dc=example,dc=com \" ldap://server.example.com backup create", "ldapadd -D \" uid=example,ou=People,dc=example,dc=com \" -W -H ldap://server.example.com dn: cn= backup-2021_07_23_12:55_00 ,cn=backup,cn=tasks,cn=config changetype: add objectClass: extensibleObject nsarchivedir: /var/lib/dirsrv/slapd-instance_name/bak/backup-2021_07_23_12:55_00 nsdatabasetype: ldbm database cn: backup-2021_07_23_12:55_00", "ls -l /var/lib/dirsrv/slapd-instance_name/bak/ total 0 drwx------. 3 dirsrv dirsrv 108 Jul 23 12:55 backup-2021_07_23_12_55_00" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/securing_red_hat_directory_server/assembly_enabling-members-of-a-group-to-back-up-directory-server-and-performing-the-backup-as-one-of-the-group-members_securing-rhds
5.4.9. Persistent Device Numbers
5.4.9. Persistent Device Numbers Major and minor device numbers are allocated dynamically at module load. Some applications work best if the block device is always activated with the same device (major and minor) number. You can specify these with the lvcreate and the lvchange commands by using the following arguments: Use a large minor number to be sure that it has not already been allocated to another device dynamically. If you are exporting a file system using NFS, specifying the fsid parameter in the exports file may avoid the need to set a persistent device number within LVM.
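As a short sketch of both points, the first command below pins the device number of an existing logical volume and the second shows an NFS export that side-steps the problem with a fixed fsid. The volume name, minor number, export path, and fsid value are made-up examples; 253 is the usual device-mapper major number, but verify it on your system first.
# Check the device-mapper major number, then pin the LV's device number.
# The volume may need to be deactivated and reactivated for the change to take effect.
grep device-mapper /proc/devices
lvchange --persistent y --major 253 --minor 238 vg00/lv_data
# Alternative for NFS: a fixed fsid in /etc/exports avoids relying on the
# underlying device number at all, for example:
# /exports/data  *(rw,sync,fsid=101)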
[ "--persistent y --major major --minor minor" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/persistent_numbers
Chapter 30. Jira
Chapter 30. Jira Both producer and consumer are supported The JIRA component interacts with the JIRA API by encapsulating Atlassian's REST Java Client for JIRA . It currently provides polling for new issues and new comments. It is also able to create new issues, add comments, change issues, add/remove watchers, add attachment and transition the state of an issue. Rather than webhooks, this endpoint relies on simple polling. Reasons include: Concern for reliability/stability The types of payloads we're polling aren't typically large (plus, paging is available in the API) The need to support apps running somewhere not publicly accessible where a webhook would fail Note that the JIRA API is fairly expansive. Therefore, this component could be easily expanded to provide additional interactions. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jira</artifactId> <version>USD{camel-version}</version> </dependency> 30.1. URI format The Jira type accepts the following operations: For consumers: newIssues: retrieve only new issues after the route is started newComments: retrieve only new comments after the route is started watchUpdates: retrieve only updated fields/issues based on provided jql For producers: addIssue: add an issue addComment: add a comment on a given issue attach: add an attachment on a given issue deleteIssue: delete a given issue updateIssue: update fields of a given issue transitionIssue: transition a status of a given issue watchers: add/remove watchers of a given issue As Jira is fully customizable, you must assure the fields IDs exists for the project and workflow, as they can change between different Jira servers. 30.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 30.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 30.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 30.3. Component Options The Jira component supports 12 options, which are listed below. 
Name Description Default Type delay (common) Time in milliseconds to elapse for the poll. 6000 Integer jiraUrl (common) Required The Jira server url, example: . String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean configuration (advanced) To use a shared base jira configuration. JiraConfiguration accessToken (security) (OAuth only) The access token generated by the Jira server. String consumerKey (security) (OAuth only) The consumer key from Jira settings. String password (security) (Basic authentication only) The password to authenticate to the Jira server. Use only if username basic authentication is used. String privateKey (security) (OAuth only) The private key generated by the client to encrypt the conversation to the server. String username (security) (Basic authentication only) The username to authenticate to the Jira server. Use only if OAuth is not enabled on the Jira server. Do not set the username and OAuth token parameter, if they are both set, the username basic authentication takes precedence. String verificationCode (security) (OAuth only) The verification code from Jira generated in the first step of the authorization proccess. String 30.4. Endpoint Options The Jira endpoint is configured using URI syntax: with the following path and query parameters: 30.4.1. Path Parameters (1 parameters) Name Description Default Type type (common) Required Operation to perform. Consumers: NewIssues, NewComments. Producers: AddIssue, AttachFile, DeleteIssue, TransitionIssue, UpdateIssue, Watchers. See this class javadoc description for more information. Enum values: ADDCOMMENT ADDISSUE ATTACH DELETEISSUE NEWISSUES NEWCOMMENTS WATCHUPDATES UPDATEISSUE TRANSITIONISSUE WATCHERS ADDISSUELINK ADDWORKLOG FETCHISSUE FETCHCOMMENTS JiraType 30.4.2. Query Parameters (16 parameters) Name Description Default Type delay (common) Time in milliseconds to elapse for the poll. 6000 Integer jiraUrl (common) Required The Jira server url, example: . 
String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean jql (consumer) JQL is the query language from JIRA which allows you to retrieve the data you want. For example jql=project=MyProject Where MyProject is the product key in Jira. It is important to use the RAW() and set the JQL inside it to prevent camel parsing it, example: RAW(project in (MYP, COM) AND resolution = Unresolved). String maxResults (consumer) Max number of issues to search for. 50 Integer sendOnlyUpdatedField (consumer) Indicator for sending only changed fields in exchange body or issue object. By default consumer sends only changed fields. true boolean watchedFields (consumer) Comma separated list of fields to watch for changes. Status,Priority are the defaults. Status,Priority String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean accessToken (security) (OAuth only) The access token generated by the Jira server. String consumerKey (security) (OAuth only) The consumer key from Jira settings. String password (security) (Basic authentication only) The password to authenticate to the Jira server. Use only if username basic authentication is used. String privateKey (security) (OAuth only) The private key generated by the client to encrypt the conversation to the server. String username (security) (Basic authentication only) The username to authenticate to the Jira server. Use only if OAuth is not enabled on the Jira server. Do not set the username and OAuth token parameter, if they are both set, the username basic authentication takes precedence. String verificationCode (security) (OAuth only) The verification code from Jira generated in the first step of the authorization proccess. String 30.5. Client Factory You can bind the JiraRestClientFactory with name JiraRestClientFactory in the registry to have it automatically set in the Jira endpoint. 30.6. Authentication Camel-jira supports Basic Authentication and OAuth 3 legged authentication . We recommend to use OAuth whenever possible, as it provides the best security for your users and system. 30.6.1. Basic authentication requirements: An username and password 30.6.2. 
OAuth authentication requirements: Follow the tutorial in the Jira OAuth documentation to generate the client private key, consumer key, verification code and access token. A private key, generated locally on your system. A verification code, generated by the Jira server. The consumer key, set in the Jira server settings. An access token, generated by the Jira server. 30.7. JQL The JQL URI option is used by both consumer endpoints. Theoretically, items like "project key", etc. could be URI options themselves. However, by requiring the use of JQL, the consumers become much more flexible and powerful. At the bare minimum, the consumers will require the following: One important thing to note is that the newIssues consumer automatically adjusts the JQL: it appends ORDER BY key desc to your JQL and prepends id > latestIssueId to retrieve only issues added after the Camel route was started. This is in order to optimize startup processing, rather than having to index every single issue in the project. The newComments consumer, in contrast, does have to index every single issue and comment in the project. Therefore, for large projects, it is vital to optimize the JQL expression as much as possible. For example, the JIRA Toolkit Plugin includes a "Number of comments" custom field - use '"Number of comments" > 0' in your query. Also try to minimize based on state (status=Open), increase the polling delay, etc. Example: 30.8. Operations See a list of required headers to set when using the Jira operations. The author field for the producers is automatically set to the authenticated user on the Jira side. If any required field is not set, then an IllegalArgumentException is thrown. Some operations require an id for fields such as issue type, priority, and transition. Check the valid ids in your Jira project, as they can differ between Jira installations and project workflows. 30.9. AddIssue Required: ProjectKey : The project key, example: CAMEL, HHH, MYP. IssueTypeId or IssueTypeName : The id of the issue type or the name of the issue type, you can see the valid list in http://jira_server/rest/api/2/issue/createmeta?projectKeys=SAMPLE_KEY . IssueSummary : The summary of the issue. Optional: IssueAssignee : The assignee user. IssuePriorityId or IssuePriorityName : The priority of the issue, you can see the valid list in http://jira_server/rest/api/2/priority . IssueComponents : A list of strings with the valid component names. IssueWatchersAdd : A list of strings with the usernames to add to the watcher list. IssueDescription : The description of the issue. 30.10. AddComment Required: IssueKey : The issue key identifier. The body of the exchange is the comment text. 30.11. Attach Only one file can be attached per invocation. Required: IssueKey : The issue key identifier. The body of the exchange should be of type File. 30.12. DeleteIssue Required: IssueKey : The issue key identifier. 30.13. TransitionIssue Required: IssueKey : The issue key identifier. IssueTransitionId : The issue transition id. The body of the exchange is the description. 30.14. UpdateIssue IssueKey : The issue key identifier. IssueTypeId or IssueTypeName : The id of the issue type or the name of the issue type, you can see the valid list in http://jira_server/rest/api/2/issue/createmeta?projectKeys=SAMPLE_KEY . IssueSummary : The summary of the issue. IssueAssignee : The assignee user. IssuePriorityId or IssuePriorityName : The priority of the issue, you can see the valid list in http://jira_server/rest/api/2/priority .
IssueComponents : A list of string with the valid component names. IssueDescription : The description of the issue. 30.15. Watcher IssueKey : The issue key identifier. IssueWatchersAdd : A list of strings with the usernames to add to the watcher list. IssueWatchersRemove : A list of strings with the usernames to remove from the watcher list. 30.16. WatchUpdates (consumer) watchedFields Comma separated list of fields to watch for changes i.e Status,Priority,Assignee,Components etc. sendOnlyUpdatedField By default only changed field is send as the body. All messages also contain following headers that add additional info about the change: issueKey : Key of the updated issue changed : name of the updated field (i.e Status) watchedIssues : list of all issue keys that are watched in the time of update 30.17. Spring Boot Auto-Configuration When using jira with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jira-starter</artifactId> </dependency> The component supports 13 options, which are listed below. Name Description Default Type camel.component.jira.access-token (OAuth only) The access token generated by the Jira server. String camel.component.jira.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.jira.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.jira.configuration To use a shared base jira configuration. The option is a org.apache.camel.component.jira.JiraConfiguration type. JiraConfiguration camel.component.jira.consumer-key (OAuth only) The consumer key from Jira settings. String camel.component.jira.delay Time in milliseconds to elapse for the poll. 6000 Integer camel.component.jira.enabled Whether to enable auto configuration of the jira component. This is enabled by default. Boolean camel.component.jira.jira-url The Jira server url, example: http://my_jira.com:8081/ . String camel.component.jira.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.jira.password (Basic authentication only) The password to authenticate to the Jira server. Use only if username basic authentication is used. 
String camel.component.jira.private-key (OAuth only) The private key generated by the client to encrypt the conversation to the server. String camel.component.jira.username (Basic authentication only) The username to authenticate to the Jira server. Use only if OAuth is not enabled on the Jira server. Do not set both the username and the OAuth token parameters; if both are set, username basic authentication takes precedence. String camel.component.jira.verification-code (OAuth only) The verification code from Jira generated in the first step of the authorization process. String
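To see how these options come together in practice, the following is a minimal Java DSL sketch rather than an excerpt from this guide: the server URL, credentials, project key, and issue type are placeholder assumptions, and the endpoint type and option names follow the tables in this chapter, so verify the exact URI casing against the endpoint options above.

import org.apache.camel.builder.RouteBuilder;

public class JiraRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Consumer: poll the (placeholder) Jira server for newly created issues.
        // RAW(...) keeps Camel from parsing the JQL expression.
        from("jira://newIssues?jiraUrl=https://myjira.example.com"
                + "&username=camel-bot&password=secret"
                + "&jql=RAW(project = MYP AND resolution = Unresolved)"
                + "&delay=10000")
            .log("New Jira issue received: ${body}");

        // Producer: create an issue by setting the required AddIssue headers
        // (ProjectKey, IssueTypeName or IssueTypeId, IssueSummary).
        from("direct:createIssue")
            .setHeader("ProjectKey", constant("MYP"))
            .setHeader("IssueTypeName", constant("Task"))
            .setHeader("IssueSummary", constant("Issue created from a Camel route"))
            .setHeader("IssueDescription", constant("Optional description of the issue."))
            .to("jira://addIssue?jiraUrl=https://myjira.example.com"
                + "&username=camel-bot&password=secret");
    }
}

With basic authentication, leave the OAuth options unset; for OAuth, replace username and password with the consumerKey, accessToken, privateKey, and verificationCode options described in the security sections above.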
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jira</artifactId> <version>USD{camel-version}</version> </dependency>", "jira://type[?options]", "jira:type", "jira://[type]?[required options]&jql=project=[project key]", "jira://[type]?[required options]&jql=RAW(project=[project key] AND status in (Open, \\\"Coding In Progress\\\") AND \\\"Number of comments\\\">0)\"", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jira-starter</artifactId> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-jira-component-starter
Chapter 3. Downloading Eclipse Temurin distributions
Chapter 3. Downloading Eclipse Temurin distributions You can download Eclipse Temurin distributions from numerous sources, such as the Adoptium website. Both the Adoptium main web page and the Eclipse Temurin web page include several download buttons for downloading different Eclipse Temurin distributions. Procedure Choose one of the following options to download an Eclipse Temurin distribution: From the Adoptium home web page or from the Eclipse Temurin project page , click one of the following buttons from the web page: Latest LTS Release button that preselects Red Hat build of OpenJDK 17 for the platform that it detects you are using and immediately begins downloading that selection. Other platforms and versions button that directs to a selection of all platform and version options, where you can choose the distribution that best suits your needs from the various formats such as archives, JRE archives, and installers. Release archive button that directs to a selection of latest releases, older releases, and nightly beta releases. Adoptium provides older releases and beta releases for development purposes only. Beta releases contain the most recent changes delivered into Red Hat build of OpenJDK, which you'll find useful for verifying fixes in development mode. Beta releases are not considered production ready and are not directly supported by Red Hat. Use the Adoptium API; see the Swagger UI v3 documentation for Eclipse Temurin . For the Eclipse Temurin Docker Hub Official Images, see the eclipse-temurin documentation (Docker Hub) . Use the Eclipse Temurin Marketplace and Marketplace API by going to the Adoptium™ Marketplace web page. This web page lists various distributions, such as the Red Hat build of OpenJDK and Eclipse Temurin distributions. Additionally, you can make a request to the Adoptium Marketplace API v1 to serve up these distributions. For Packages.adoptium.net , see the relevant steps outlined in Eclipse Temurin Linux (RPM/DEB) Installer Packages (Adoptium) . Additional resources Adoptium home web page (Adoptium) Eclipse Temurin (Adoptium)
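For scripted downloads, the Adoptium API can also serve binaries directly. The following shell sketch is not part of the official procedure: the path segments are assumptions based on the v3 binary endpoint layout, so confirm them against the Swagger UI documentation linked above before relying on them.

# Assumed layout: /v3/binary/latest/<feature_version>/<release_type>/<os>/<arch>/<image_type>/<jvm_impl>/<heap_size>/<vendor>
curl -L -o temurin-11-jdk-linux-x64.tar.gz \
  "https://api.adoptium.net/v3/binary/latest/11/ga/linux/x64/jdk/hotspot/normal/eclipse"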
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/getting_started_with_eclipse_temurin/downloading-temurin11_openjdk
Chapter 2. Image Registry Operator in OpenShift Container Platform
Chapter 2. Image Registry Operator in OpenShift Container Platform 2.1. Image Registry on cloud platforms and OpenStack The Image Registry Operator installs a single instance of the OpenShift image registry, and manages all registry configuration, including setting up registry storage. Note Storage is only automatically configured when you install an installer-provisioned infrastructure cluster on AWS, Azure, GCP, IBM, or OpenStack. When you install or upgrade an installer-provisioned infrastructure cluster on AWS, Azure, GCP, IBM, or OpenStack, the Image Registry Operator sets the spec.storage.managementState parameter to Managed . If the spec.storage.managementState parameter is set to Unmanaged , the Image Registry Operator takes no action related to storage. After the control plane deploys, the Operator creates a default configs.imageregistry.operator.openshift.io resource instance based on configuration detected in the cluster. If insufficient information is available to define a complete configs.imageregistry.operator.openshift.io resource, the incomplete resource is defined and the Operator updates the resource status with information about what is missing. The Image Registry Operator runs in the openshift-image-registry namespace, and manages the registry instance in that location as well. All configuration and workload resources for the registry reside in that namespace. Important The Image Registry Operator's behavior for managing the pruner is orthogonal to the managementState specified on the ClusterOperator object for the Image Registry Operator. If the Image Registry Operator is not in the Managed state, the image pruner can still be configured and managed by the Pruning custom resource. However, the managementState of the Image Registry Operator alters the behavior of the deployed image pruner job: Managed : the --prune-registry flag for the image pruner is set to true . Removed : the --prune-registry flag for the image pruner is set to false , meaning it only prunes image metadata in etcd. 2.2. Image Registry on bare metal, Nutanix, and vSphere 2.2.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . 2.3. Image Registry Operator distribution across availability zones The default configuration of the Image Registry Operator spreads image registry pods across topology zones to prevent delayed recovery times in case of a complete zone failure where all pods are impacted.
The Image Registry Operator defaults to the following when deployed with a zone-related topology constraint: Image Registry Operator deployed with a zone related topology constraint topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule The Image Registry Operator defaults to the following when deployed without a zone-related topology constraint, which applies to bare metal and vSphere instances: Image Registry Operator deployed without a zone related topology constraint topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule A cluster administrator can override the default topologySpreadConstraints by configuring the configs.imageregistry.operator.openshift.io/cluster spec file. In that case, only the constraints you provide apply. 2.4. Additional resources Configuring pod topology spread constraints 2.5. Image Registry Operator configuration parameters The configs.imageregistry.operator.openshift.io resource offers the following configuration parameters. Parameter Description managementState Managed : The Operator updates the registry as configuration resources are updated. Unmanaged : The Operator ignores changes to the configuration resources. Removed : The Operator removes the registry instance and tear down any storage that the Operator provisioned. logLevel Sets logLevel of the registry instance. Defaults to Normal . The following values for logLevel are supported: Normal Debug Trace TraceAll httpSecret Value needed by the registry to secure uploads, generated by default. operatorLogLevel The operatorLogLevel configuration parameter provides intent-based logging for the Operator itself and a simple way to manage coarse-grained logging choices that Operators must interpret for themselves. This configuration parameter defaults to Normal . It does not provide fine-grained control. The following values for operatorLogLevel are supported: Normal Debug Trace TraceAll proxy Defines the Proxy to be used when calling master API and upstream registries. storage Storagetype : Details for configuring registry storage, for example S3 bucket coordinates. Normally configured by default. readOnly Indicates whether the registry instance should reject attempts to push new images or delete existing ones. requests API Request Limit details. Controls how many parallel requests a given registry instance will handle before queuing additional requests. defaultRoute Determines whether or not an external route is defined using the default hostname. If enabled, the route uses re-encrypt encryption. Defaults to false . routes Array of additional routes to create. You provide the hostname and certificate for the route. rolloutStrategy Defines rollout strategy for the image registry deployment. Defaults to RollingUpdate . replicas Replica count for the registry. disableRedirect Controls whether to route all data through the registry, rather than redirecting to the back end. 
Defaults to false . spec.storage.managementState The Image Registry Operator sets the spec.storage.managementState parameter to Managed on new installations or upgrades of clusters using installer-provisioned infrastructure on AWS or Azure. Managed : Determines that the Image Registry Operator manages underlying storage. If the Image Registry Operator's managementState is set to Removed , then the storage is deleted. If the managementState is set to Managed , the Image Registry Operator attempts to apply some default configuration on the underlying storage unit. For example, if set to Managed , the Operator tries to enable encryption on the S3 bucket before making it available to the registry. If you do not want the default settings to be applied on the storage you are providing, make sure the managementState is set to Unmanaged . Unmanaged : Determines that the Image Registry Operator ignores the storage settings. If the Image Registry Operator's managementState is set to Removed , then the storage is not deleted. If you provided an underlying storage unit configuration, such as a bucket or container name, and the spec.storage.managementState is not yet set to any value, then the Image Registry Operator configures it to Unmanaged . 2.6. Enable the Image Registry default route with the Custom Resource Definition In OpenShift Container Platform, the Registry Operator controls the OpenShift image registry feature. The Operator is defined by the configs.imageregistry.operator.openshift.io Custom Resource Definition (CRD). If you need to automatically enable the Image Registry default route, patch the Image Registry Operator CRD. Procedure Patch the Image Registry Operator CRD: USD oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"defaultRoute":true}}' 2.7. Configuring additional trust stores for image registry access The image.config.openshift.io/cluster custom resource can contain a reference to a config map that contains additional certificate authorities to be trusted during image registry access. Prerequisites The certificate authorities (CA) must be PEM-encoded. Procedure You can create a config map in the openshift-config namespace and use its name in AdditionalTrustedCA in the image.config.openshift.io custom resource to provide additional CAs that should be trusted when contacting external registries. The config map key is the hostname of a registry with the port for which this CA is to be trusted, and the PEM certificate content is the value, for each additional registry CA to trust. Image registry CA config map example apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- 1 If the registry has the port, such as registry-with-port.example.com:5000 , : should be replaced with .. . You can configure additional CAs with the following procedure. To configure an additional CA: USD oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config USD oc edit image.config.openshift.io cluster spec: additionalTrustedCA: name: registry-config 2.8. 
Configuring storage credentials for the Image Registry Operator In addition to the configs.imageregistry.operator.openshift.io and ConfigMap resources, storage credential configuration is provided to the Operator by a separate secret resource located within the openshift-image-registry namespace. The image-registry-private-configuration-user secret provides credentials needed for storage access and management. It overrides the default credentials used by the Operator, if default credentials were found. Procedure Create an OpenShift Container Platform secret that contains the required keys. USD oc create secret generic image-registry-private-configuration-user --from-literal=KEY1=value1 --from-literal=KEY2=value2 --namespace openshift-image-registry 2.9. Additional resources Configuring the registry for AWS user-provisioned infrastructure Configuring the registry for GCP user-provisioned infrastructure Configuring the registry for Azure user-provisioned infrastructure Configuring the registry for bare metal Configuring the registry for vSphere
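To illustrate how the parameters in this chapter fit together, the following is a hypothetical configs.imageregistry.operator.openshift.io/cluster resource, not a configuration to apply as-is: the bucket name and region are placeholders, and in practice you edit the existing cluster resource rather than create a new one.

apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed
  replicas: 2
  logLevel: Normal
  defaultRoute: true
  rolloutStrategy: RollingUpdate
  storage:
    managementState: Unmanaged
    s3:
      bucket: my-registry-bucket   # placeholder
      region: us-east-1            # placeholder

You can make equivalent changes with oc edit configs.imageregistry.operator.openshift.io/cluster or with oc patch, as shown in the platform-specific procedures in the next chapter.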
[ "topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule", "topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule", "oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{\"spec\":{\"defaultRoute\":true}}'", "apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config", "oc edit image.config.openshift.io cluster", "spec: additionalTrustedCA: name: registry-config", "oc create secret generic image-registry-private-configuration-user --from-literal=KEY1=value1 --from-literal=KEY2=value2 --namespace openshift-image-registry" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/registry/configuring-registry-operator
Chapter 3. Setting up and configuring the registry
Chapter 3. Setting up and configuring the registry 3.1. Configuring the registry for AWS user-provisioned infrastructure 3.1.1. Configuring a secret for the Image Registry Operator In addition to the configs.imageregistry.operator.openshift.io and ConfigMap resources, configuration is provided to the Operator by a separate secret resource located within the openshift-image-registry namespace. The image-registry-private-configuration-user secret provides credentials needed for storage access and management. It overrides the default credentials used by the Operator, if default credentials were found. For S3 on AWS storage, the secret is expected to contain two keys: REGISTRY_STORAGE_S3_ACCESSKEY REGISTRY_STORAGE_S3_SECRETKEY Procedure Create an OpenShift Container Platform secret that contains the required keys. USD oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=myaccesskey --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=mysecretkey --namespace openshift-image-registry 3.1.2. Configuring registry storage for AWS with user-provisioned infrastructure During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage. If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure. Prerequisites You have a cluster on AWS with user-provisioned infrastructure. For Amazon S3 storage, the secret is expected to contain two keys: REGISTRY_STORAGE_S3_ACCESSKEY REGISTRY_STORAGE_S3_SECRETKEY Procedure Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: s3: bucket: <bucket-name> region: <region-name> Warning To secure your registry images in AWS, block public access to the S3 bucket. 3.1.3. Image Registry Operator configuration parameters for AWS S3 The following configuration parameters are available for AWS S3 registry storage. The image registry spec.storage.s3 configuration parameter holds the information to configure the registry to use the AWS S3 service for back-end storage. See the S3 storage driver documentation for more information. Parameter Description bucket Bucket is the bucket name in which you want to store the registry's data. It is optional and is generated if not provided. region Region is the AWS region in which your bucket exists. It is optional and is set based on the installed AWS Region. regionEndpoint RegionEndpoint is the endpoint for S3 compatible storage services. It is optional and defaults based on the Region that is provided. virtualHostedStyle VirtualHostedStyle enables using S3 virtual hosted style bucket paths with a custom RegionEndpoint. It is optional and defaults to false. Set this parameter to deploy OpenShift Container Platform to hidden regions. encrypt Encrypt specifies whether or not the registry stores the image in encrypted format. It is optional and defaults to false. keyID KeyID is the KMS key ID to use for encryption. It is optional. Encrypt must be true, or this parameter is ignored. 
cloudFront CloudFront configures Amazon Cloudfront as the storage middleware in a registry. It is optional. trustedCA The namespace for the config map referenced by trustedCA is openshift-config . The key for the bundle in the config map is ca-bundle.crt . It is optional. Note When the value of the regionEndpoint parameter is configured to a URL of a Rados Gateway, an explicit port must not be specified. For example: regionEndpoint: http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc.cluster.local 3.2. Configuring the registry for GCP user-provisioned infrastructure 3.2.1. Configuring a secret for the Image Registry Operator In addition to the configs.imageregistry.operator.openshift.io and ConfigMap resources, configuration is provided to the Operator by a separate secret resource located within the openshift-image-registry namespace. The image-registry-private-configuration-user secret provides credentials needed for storage access and management. It overrides the default credentials used by the Operator, if default credentials were found. For GCS on GCP storage, the secret is expected to contain one key whose value is the contents of a credentials file provided by GCP: REGISTRY_STORAGE_GCS_KEYFILE Procedure Create an OpenShift Container Platform secret that contains the required keys. USD oc create secret generic image-registry-private-configuration-user --from-file=REGISTRY_STORAGE_GCS_KEYFILE=<path_to_keyfile> --namespace openshift-image-registry 3.2.2. Configuring the registry storage for GCP with user-provisioned infrastructure If the Registry Operator cannot create a Google Cloud Platform (GCP) bucket, you must set up the storage medium manually and configure the settings in the registry custom resource (CR). Prerequisites A cluster on GCP with user-provisioned infrastructure. To configure registry storage for GCP, you need to provide Registry Operator cloud credentials. For GCS on GCP storage, the secret is expected to contain one key whose value is the contents of a credentials file provided by GCP: REGISTRY_STORAGE_GCS_KEYFILE Procedure Set up an Object Lifecycle Management policy to abort incomplete multipart uploads that are one day old. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration # ... storage: gcs: bucket: <bucket-name> projectID: <project-id> region: <region-name> # ... Warning You can secure your registry images that use a Google Cloud Storage bucket by setting public access prevention . 3.2.3. Image Registry Operator configuration parameters for GCP GCS The following configuration parameters are available for GCP GCS registry storage. Parameter Description bucket Bucket is the bucket name in which you want to store the registry's data. It is optional and is generated if not provided. region Region is the GCS location in which your bucket exists. It is optional and is set based on the installed GCS Region. projectID ProjectID is the Project ID of the GCP project that this bucket should be associated with. It is optional. keyID KeyID is the KMS key ID to use for encryption. It is optional because buckets are encrypted by default on GCP. This allows for the use of a custom encryption key. 3.3. Configuring the registry for OpenStack user-provisioned infrastructure You can configure the registry of a cluster that runs on your own Red Hat OpenStack Platform (RHOSP) infrastructure. 3.3.1. 
Configuring Image Registry Operator redirects By disabling redirects, you can configure the Image Registry Operator to control whether clients such as OpenShift Container Platform cluster builds or external systems like developer machines are redirected to pull images directly from Red Hat OpenStack Platform (RHOSP) Swift storage. This configuration is optional and depends on whether the clients trust the storage's SSL/TLS certificates. Note In situations where clients do not trust the storage certificate, you can set the disableRedirect option to true so that traffic is proxied through the image registry. As a consequence, however, the image registry might require more resources, especially network bandwidth, to handle the increased load. Alternatively, if clients trust the storage certificate, the registry can allow redirects. This reduces resource demand on the registry itself. Some users might prefer to configure their clients to trust their self-signed certificate authorities (CAs) instead of disabling redirects. If you are using a self-signed CA, you must decide between trusting the custom CAs or disabling redirects. Procedure To ensure that the image registry proxies traffic instead of relying on Swift storage, change the value of the spec.disableRedirect field in the config.imageregistry object to true by running the following command: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"disableRedirect":true}}' 3.3.2. Configuring a secret for the Image Registry Operator In addition to the configs.imageregistry.operator.openshift.io and ConfigMap resources, configuration is provided to the Operator by a separate secret resource located within the openshift-image-registry namespace. The image-registry-private-configuration-user secret provides credentials needed for storage access and management. It overrides the default credentials used by the Operator, if default credentials were found. For Swift on Red Hat OpenStack Platform (RHOSP) storage, the secret is expected to contain the following two keys: REGISTRY_STORAGE_SWIFT_USERNAME REGISTRY_STORAGE_SWIFT_PASSWORD Procedure Create an OpenShift Container Platform secret that contains the required keys. USD oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_SWIFT_USERNAME=<username> --from-literal=REGISTRY_STORAGE_SWIFT_PASSWORD=<password> -n openshift-image-registry 3.3.3. Registry storage for RHOSP with user-provisioned infrastructure If the Registry Operator cannot create a Swift bucket, you must set up the storage medium manually and configure the settings in the registry custom resource (CR). Prerequisites A cluster on Red Hat OpenStack Platform (RHOSP) with user-provisioned infrastructure. To configure registry storage for RHOSP, you need to provide Registry Operator cloud credentials. For Swift on RHOSP storage, the secret is expected to contain the following two keys: REGISTRY_STORAGE_SWIFT_USERNAME REGISTRY_STORAGE_SWIFT_PASSWORD Procedure Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration # ... storage: swift: container: <container-id> # ... 3.3.4. Image Registry Operator configuration parameters for RHOSP Swift The following configuration parameters are available for Red Hat OpenStack Platform (RHOSP) Swift registry storage. Parameter Description authURL Defines the URL for obtaining the authentication token.
This value is optional. authVersion Specifies the Auth version of RHOSP, for example, authVersion: "3" . This value is optional. container Defines the name of a Swift container for storing registry data. This value is optional. domain Specifies the RHOSP domain name for the Identity v3 API. This value is optional. domainID Specifies the RHOSP domain ID for the Identity v3 API. This value is optional. tenant Defines the RHOSP tenant name to be used by the registry. This value is optional. tenantID Defines the RHOSP tenant ID to be used by the registry. This value is optional. regionName Defines the RHOSP region in which the container exists. This value is optional. 3.4. Configuring the registry for Azure user-provisioned infrastructure 3.4.1. Configuring a secret for the Image Registry Operator In addition to the configs.imageregistry.operator.openshift.io and ConfigMap resources, configuration is provided to the Operator by a separate secret resource located within the openshift-image-registry namespace. The image-registry-private-configuration-user secret provides credentials needed for storage access and management. It overrides the default credentials used by the Operator, if default credentials were found. For Azure registry storage, the secret is expected to contain one key whose value is the contents of a credentials file provided by Azure: REGISTRY_STORAGE_AZURE_ACCOUNTKEY Procedure Create an OpenShift Container Platform secret that contains the required key. USD oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_AZURE_ACCOUNTKEY=<accountkey> --namespace openshift-image-registry 3.4.2. Configuring registry storage for Azure During installation, your cloud credentials are sufficient to create Azure Blob Storage, and the Registry Operator automatically configures storage. Prerequisites A cluster on Azure with user-provisioned infrastructure. To configure registry storage for Azure, provide Registry Operator cloud credentials. For Azure storage the secret is expected to contain one key: REGISTRY_STORAGE_AZURE_ACCOUNTKEY Procedure Create an Azure storage container . Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: azure: accountName: <storage-account-name> container: <container-name> 3.4.3. Configuring registry storage for Azure Government During installation, your cloud credentials are sufficient to create Azure Blob Storage, and the Registry Operator automatically configures storage. Prerequisites A cluster on Azure with user-provisioned infrastructure in a government region. To configure registry storage for Azure, provide Registry Operator cloud credentials. For Azure storage, the secret is expected to contain one key: REGISTRY_STORAGE_AZURE_ACCOUNTKEY Procedure Create an Azure storage container . Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: azure: accountName: <storage-account-name> container: <container-name> cloudName: AzureUSGovernmentCloud 1 1 cloudName is the name of the Azure cloud environment, which can be used to configure the Azure SDK with the appropriate Azure API endpoints. Defaults to AzurePublicCloud . You can also set cloudName to AzureUSGovernmentCloud , AzureChinaCloud , or AzureGermanCloud with sufficient credentials. 3.5. 
Configuring the registry for RHOSP 3.5.1. Configuring an image registry with custom storage on clusters that run on RHOSP After you install a cluster on Red Hat OpenStack Platform (RHOSP), you can use a Cinder volume that is in a specific availability zone for registry storage. Procedure Create a YAML file that specifies the storage class and availability zone to use. For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name> Note OpenShift Container Platform does not verify the existence of the availability zone you choose. Verify the name of the availability zone before you apply the configuration. From a command line, apply the configuration: USD oc apply -f <storage_class_file_name> Example output storageclass.storage.k8s.io/custom-csi-storageclass created Create a YAML file that specifies a persistent volume claim (PVC) that uses your storage class and the openshift-image-registry namespace. For example: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: "true" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3 1 Enter the namespace openshift-image-registry . This namespace allows the Cluster Image Registry Operator to consume the PVC. 2 Optional: Adjust the volume size. 3 Enter the name of the storage class that you created. From a command line, apply the configuration: USD oc apply -f <pvc_file_name> Example output persistentvolumeclaim/csi-pvc-imageregistry created Replace the original persistent volume claim in the image registry configuration with the new claim: USD oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{"op": "replace", "path": "/spec/storage/pvc/claim", "value": "csi-pvc-imageregistry"}]' Example output config.imageregistry.operator.openshift.io/cluster patched Over the several minutes, the configuration is updated. Verification To confirm that the registry is using the resources that you defined: Verify that the PVC claim value is identical to the name that you provided in your PVC definition: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output ... status: ... managementState: Managed pvc: claim: csi-pvc-imageregistry ... Verify that the status of the PVC is Bound : USD oc get pvc -n openshift-image-registry csi-pvc-imageregistry Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m 3.6. Configuring the registry for bare metal 3.6.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 3.6.2. Changing the image registry's management state To start the image registry, you must change the Image Registry Operator configuration's managementState from Removed to Managed . 
Procedure Change the managementState in the Image Registry Operator configuration from Removed to Managed . For example: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}' 3.6.3. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 3.6.3.1. Configuring registry storage for bare metal and other manual installations As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster that uses manually-provisioned Red Hat Enterprise Linux CoreOS (RHCOS) nodes, such as bare metal. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m Ensure that your registry is set to Managed to enable building and pushing of images: edit the registry configuration as shown above and change the managementState line from Removed to Managed . 3.6.3.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters.
If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 3.6.3.3. Configuring block registry storage for bare metal To allow the image registry to use block storage types during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes, or block persistent volumes, are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. If you choose to use a block storage volume with the image registry, you must use a filesystem persistent volume claim (PVC). Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only one ( 1 ) replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. 3.6.3.4. Configuring the Image Registry Operator to use Ceph RGW storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use Ceph RGW storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and Ceph RGW object storage. Procedure Create the object bucket claim using the ocs-storagecluster-ceph-rgw storage class. 
For example: cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF 1 Alternatively, you can use the openshift-image-registry namespace. Get the bucket name by entering the following command: USD bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}') Get the AWS credentials by entering the following commands: USD AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode) USD AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode) Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket under openshift-image-registry project by entering the following command: USD oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry Get the route host by entering the following command: USD route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}') Create a config map that uses an ingress certificate by entering the following commands: USD oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm USD oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config Configure the image registry to use the Ceph RGW object storage by entering the following command: USD oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"USD{bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://USD{route_host}\"',"virtualHostedStyle":false,"encrypt":false,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge 3.6.3.5. Configuring the Image Registry Operator to use Noobaa storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use Noobaa storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and Noobaa object storage. Procedure Create the object bucket claim using the openshift-storage.noobaa.io storage class. For example: cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF 1 Alternatively, you can use the openshift-image-registry namespace. 
Get the bucket name by entering the following command: USD bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}') Get the AWS credentials by entering the following commands: USD AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w "AWS_ACCESS_KEY_ID:" | head -n1 | awk '{print USD2}' | base64 --decode) USD AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w "AWS_SECRET_ACCESS_KEY:" | head -n1 | awk '{print USD2}' | base64 --decode) Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket under openshift-image-registry project by entering the following command: USD oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry Get the route host by entering the following command: USD route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}') Create a config map that uses an ingress certificate by entering the following commands: USD oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm USD oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config Configure the image registry to use the Nooba object storage by entering the following command: USD oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"USD{bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://USD{route_host}\"',"virtualHostedStyle":false,"encrypt":false,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge 3.6.4. Configuring the Image Registry Operator to use CephFS storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use CephFS storage. Note CephFS uses persistent volume claim (PVC) storage. It is not recommended to use PVCs for image registry storage if there are other options are available, such as Ceph RGW or Noobaa. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and CephFS file storage. Procedure Create a PVC to use the cephfs storage class. For example: cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF Configure the image registry to use the CephFS file system storage by entering the following command: USD oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","pvc":{"claim":"registry-storage-pvc"}}}}' --type=merge 3.6.5. 
Additional resources Recommended configurable storage technology Configuring Image Registry to use OpenShift Data Foundation 3.7. Configuring the registry for vSphere 3.7.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 3.7.2. Changing the image registry's management state To start the image registry, you must change the Image Registry Operator configuration's managementState from Removed to Managed . Procedure Change managementState Image Registry Operator configuration from Removed to Managed . For example: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}' 3.7.3. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 3.7.3.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resourses found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. 
Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 3.7.3.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 3.7.3.3. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. 
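Note If more than one storage class is available in the cluster, you might prefer to set spec.storageClassName explicitly in this file so that the claim does not depend on the cluster default. The class name in the following fragment is only an illustrative assumption and is not part of the original procedure; verify the actual name in your environment with oc get storageclass before using it: storageClassName: thin (added under spec )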
Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 3.7.3.4. Configuring the Image Registry Operator to use Ceph RGW storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use Ceph RGW storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and Ceph RGW object storage. Procedure Create the object bucket claim using the ocs-storagecluster-ceph-rgw storage class. For example: cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF 1 Alternatively, you can use the openshift-image-registry namespace. Get the bucket name by entering the following command: USD bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}') Get the AWS credentials by entering the following commands: USD AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode) USD AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode) Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket under openshift-image-registry project by entering the following command: USD oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry Get the route host by entering the following command: USD route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}') Create a config map that uses an ingress certificate by entering the following commands: USD oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm USD oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config Configure the image registry to use the Ceph RGW object storage by entering the following command: USD oc patch config.image/cluster -p 
'{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"USD{bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://USD{route_host}\"',"virtualHostedStyle":false,"encrypt":false,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge 3.7.3.5. Configuring the Image Registry Operator to use Noobaa storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use Noobaa storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and Noobaa object storage. Procedure Create the object bucket claim using the openshift-storage.noobaa.io storage class. For example: cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF 1 Alternatively, you can use the openshift-image-registry namespace. Get the bucket name by entering the following command: USD bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}') Get the AWS credentials by entering the following commands: USD AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w "AWS_ACCESS_KEY_ID:" | head -n1 | awk '{print USD2}' | base64 --decode) USD AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w "AWS_SECRET_ACCESS_KEY:" | head -n1 | awk '{print USD2}' | base64 --decode) Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket under openshift-image-registry project by entering the following command: USD oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry Get the route host by entering the following command: USD route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}') Create a config map that uses an ingress certificate by entering the following commands: USD oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm USD oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config Configure the image registry to use the Nooba object storage by entering the following command: USD oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"USD{bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://USD{route_host}\"',"virtualHostedStyle":false,"encrypt":false,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge 3.7.4. 
Configuring the Image Registry Operator to use CephFS storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use CephFS storage. Note CephFS uses persistent volume claim (PVC) storage. It is not recommended to use PVCs for image registry storage if there are other options are available, such as Ceph RGW or Noobaa. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and CephFS file storage. Procedure Create a PVC to use the cephfs storage class. For example: cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF Configure the image registry to use the CephFS file system storage by entering the following command: USD oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","pvc":{"claim":"registry-storage-pvc"}}}}' --type=merge 3.7.5. Additional resources Recommended configurable storage technology Configuring Image Registry to use OpenShift Data Foundation 3.8. Configuring the registry for Red Hat OpenShift Data Foundation To configure the OpenShift image registry on bare metal and vSphere to use Red Hat OpenShift Data Foundation storage, you must install OpenShift Data Foundation and then configure image registry using Ceph or Noobaa. 3.8.1. Configuring the Image Registry Operator to use Ceph RGW storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use Ceph RGW storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and Ceph RGW object storage. Procedure Create the object bucket claim using the ocs-storagecluster-ceph-rgw storage class. For example: cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF 1 Alternatively, you can use the openshift-image-registry namespace. 
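Note The bucket name and credentials that the following steps read are populated only after the object bucket claim has been provisioned. As an optional check that is not part of the original procedure, you can wait for the ObjectBucketClaim to report the Bound phase before continuing, for example: $ oc get obc -n openshift-storage rgwbucket -o jsonpath='{.status.phase}' The command is expected to print Bound once the bucket and its companion secret have been created.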
Get the bucket name by entering the following command: USD bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}') Get the AWS credentials by entering the following commands: USD AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode) USD AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode) Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket under openshift-image-registry project by entering the following command: USD oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry Get the route host by entering the following command: USD route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}') Create a config map that uses an ingress certificate by entering the following commands: USD oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm USD oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config Configure the image registry to use the Ceph RGW object storage by entering the following command: USD oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"USD{bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://USD{route_host}\"',"virtualHostedStyle":false,"encrypt":false,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge 3.8.2. Configuring the Image Registry Operator to use Noobaa storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use Noobaa storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and Noobaa object storage. Procedure Create the object bucket claim using the openshift-storage.noobaa.io storage class. For example: cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF 1 Alternatively, you can use the openshift-image-registry namespace. 
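Note In the credential step that follows, the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY values are extracted from the generated secret with grep and awk. Assuming the NooBaa-generated secret stores these values under .data in the same way as the Ceph RGW secret shown earlier in this document, they can usually be read more directly with jsonpath; this variant is only a sketch and is not part of the original procedure: $ AWS_ACCESS_KEY_ID=$(oc get secret -n openshift-storage noobaatest -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode) $ AWS_SECRET_ACCESS_KEY=$(oc get secret -n openshift-storage noobaatest -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)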
Get the bucket name by entering the following command: USD bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}') Get the AWS credentials by entering the following commands: USD AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w "AWS_ACCESS_KEY_ID:" | head -n1 | awk '{print USD2}' | base64 --decode) USD AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w "AWS_SECRET_ACCESS_KEY:" | head -n1 | awk '{print USD2}' | base64 --decode) Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket under openshift-image-registry project by entering the following command: USD oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry Get the route host by entering the following command: USD route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}') Create a config map that uses an ingress certificate by entering the following commands: USD oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm USD oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config Configure the image registry to use the Nooba object storage by entering the following command: USD oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"USD{bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://USD{route_host}\"',"virtualHostedStyle":false,"encrypt":false,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge 3.8.3. Configuring the Image Registry Operator to use CephFS storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use CephFS storage. Note CephFS uses persistent volume claim (PVC) storage. It is not recommended to use PVCs for image registry storage if there are other options are available, such as Ceph RGW or Noobaa. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and CephFS file storage. Procedure Create a PVC to use the cephfs storage class. For example: cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF Configure the image registry to use the CephFS file system storage by entering the following command: USD oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","pvc":{"claim":"registry-storage-pvc"}}}}' --type=merge 3.8.4. 
Additional resources Configuring Image Registry to use OpenShift Data Foundation Performance tuning guide for Multicloud Object Gateway (NooBaa) 3.9. Configuring the registry for Nutanix By following the steps outlined in this documentation, users can optimize container image distribution, security, and access controls, enabling a robust foundation for Nutanix applications on OpenShift Container Platform. 3.9.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 3.9.2. Changing the image registry's management state To start the image registry, you must change the Image Registry Operator configuration's managementState from Removed to Managed . Procedure Change the managementState of the Image Registry Operator configuration from Removed to Managed . For example: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}' 3.9.3. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 3.9.3.1. Configuring registry storage for Nutanix As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on Nutanix. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. You must have 100 Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class.
However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m 3.9.3.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 3.9.3.3. Configuring block registry storage for Nutanix volumes To allow the image registry to use block storage types such as Nutanix volumes during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes, or block persistent volumes, are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. If you choose to use a block storage volume with the image registry, you must use a filesystem persistent volume claim (PVC). Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only one ( 1 ) replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a Nutanix PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. 3.9.3.4. 
Configuring the Image Registry Operator to use Ceph RGW storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use Ceph RGW storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and Ceph RGW object storage. Procedure Create the object bucket claim using the ocs-storagecluster-ceph-rgw storage class. For example: cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF 1 Alternatively, you can use the openshift-image-registry namespace. Get the bucket name by entering the following command: USD bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}') Get the AWS credentials by entering the following commands: USD AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode) USD AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode) Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket under openshift-image-registry project by entering the following command: USD oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry Get the route host by entering the following command: USD route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}') Create a config map that uses an ingress certificate by entering the following commands: USD oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm USD oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config Configure the image registry to use the Ceph RGW object storage by entering the following command: USD oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"USD{bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://USD{route_host}\"',"virtualHostedStyle":false,"encrypt":false,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge 3.9.3.5. Configuring the Image Registry Operator to use Noobaa storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use Noobaa storage. 
Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and Noobaa object storage. Procedure Create the object bucket claim using the openshift-storage.noobaa.io storage class. For example: cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF 1 Alternatively, you can use the openshift-image-registry namespace. Get the bucket name by entering the following command: USD bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}') Get the AWS credentials by entering the following commands: USD AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w "AWS_ACCESS_KEY_ID:" | head -n1 | awk '{print USD2}' | base64 --decode) USD AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w "AWS_SECRET_ACCESS_KEY:" | head -n1 | awk '{print USD2}' | base64 --decode) Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket under openshift-image-registry project by entering the following command: USD oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry Get the route host by entering the following command: USD route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}') Create a config map that uses an ingress certificate by entering the following commands: USD oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm USD oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config Configure the image registry to use the Nooba object storage by entering the following command: USD oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"USD{bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://USD{route_host}\"',"virtualHostedStyle":false,"encrypt":false,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge 3.9.4. Configuring the Image Registry Operator to use CephFS storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use CephFS storage. Note CephFS uses persistent volume claim (PVC) storage. It is not recommended to use PVCs for image registry storage if there are other options are available, such as Ceph RGW or Noobaa. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and CephFS file storage. 
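Note Before creating the claim in the following procedure, you can optionally confirm that the CephFS storage class it references exists in the cluster; this check is not part of the original procedure: $ oc get storageclass ocs-storagecluster-cephfs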
Procedure Create a PVC to use the cephfs storage class. For example: cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF Configure the image registry to use the CephFS file system storage by entering the following command: USD oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","pvc":{"claim":"registry-storage-pvc"}}}}' --type=merge 3.9.5. Additional resources Recommended configurable storage technology Configuring Image Registry to use OpenShift Data Foundation
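Note Whichever storage backend you configure with the oc patch commands in this document, you can typically confirm that the change rolled out by checking the image-registry cluster Operator and the registry pods, using commands that already appear elsewhere in this document: $ oc get clusteroperator image-registry $ oc get pod -n openshift-image-registry -l docker-registry=default When the patch sets replicas to 2, two image-registry pods are expected to eventually report Running.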
[ "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=myaccesskey --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=mysecretkey --namespace openshift-image-registry", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: s3: bucket: <bucket-name> region: <region-name>", "regionEndpoint: http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc.cluster.local", "oc create secret generic image-registry-private-configuration-user --from-file=REGISTRY_STORAGE_GCS_KEYFILE=<path_to_keyfile> --namespace openshift-image-registry", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: gcs: bucket: <bucket-name> projectID: <project-id> region: <region-name>", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"disableRedirect\":true}}'", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_SWIFT_USERNAME=<username> --from-literal=REGISTRY_STORAGE_SWIFT_PASSWORD=<password> -n openshift-image-registry", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: swift: container: <container-id>", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_AZURE_ACCOUNTKEY=<accountkey> --namespace openshift-image-registry", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: azure: accountName: <storage-account-name> container: <container-name>", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: azure: accountName: <storage-account-name> container: <container-name> cloudName: AzureUSGovernmentCloud 1", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name>", "oc apply -f <storage_class_file_name>", "storageclass.storage.k8s.io/custom-csi-storageclass created", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3", "oc apply -f <pvc_file_name>", "persistentvolumeclaim/csi-pvc-imageregistry created", "oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'", "config.imageregistry.operator.openshift.io/cluster patched", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "status: managementState: Managed pvc: claim: csi-pvc-imageregistry", "oc get pvc -n openshift-image-registry csi-pvc-imageregistry", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True 
False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF", "bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF", "bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default 
-o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF", "bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster 
-p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF", "bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF", "bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json 
| jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF", "bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p 
'{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF", "bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF", "bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p 
'{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/registry/setting-up-and-configuring-the-registry
Chapter 2. Installing a cluster with z/VM on IBM Z and IBM LinuxONE
Chapter 2. Installing a cluster with z/VM on IBM Z and IBM LinuxONE In OpenShift Container Platform version 4.14, you can install a cluster on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision. Note While this document refers only to IBM Z(R), all information in it also applies to IBM(R) LinuxONE. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 2.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 2.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. 
Important To improve high availability of your cluster, distribute the control plane machines over different z/VM instances on at least two physical machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 2.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 2.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 2.3.3. Minimum IBM Z system environment You can install OpenShift Container Platform version 4.14 on the following IBM(R) hardware: IBM(R) z16 (all models), IBM(R) z15 (all models), IBM(R) z14 (all models) IBM(R) LinuxONE 4 (all models), IBM(R) LinuxONE III (all models), IBM(R) LinuxONE Emperor II, IBM(R) LinuxONE Rockhopper II Hardware requirements The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z(R). However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Important Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. Operating system requirements One instance of z/VM 7.2 or later On your z/VM instance, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine IBM Z network connectivity requirements To install on IBM Z(R) under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: A direct-attached OSA or RoCE network adapter A z/VM VSWITCH in layer 2 Ethernet mode set up Disk storage for the z/VM guest virtual machines FICON attached disk storage (DASDs). 
These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine 2.3.4. Preferred IBM Z system environment Hardware requirements Three LPARs that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. HiperSockets, which are attached to a node either directly as a device or by bridging with one z/VM VSWITCH to be transparent to the z/VM guest. To directly connect HiperSockets to a node, you must set up a gateway to the external network via a RHEL 8 guest to bridge to the HiperSockets network. Operating system requirements Two or three instances of z/VM 7.2 or later for high availability On your z/VM instances, set up: Three guest virtual machines for OpenShift Container Platform control plane machines, one per z/VM instance. At least six guest virtual machines for OpenShift Container Platform compute machines, distributed across the z/VM instances. One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine. To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using the CP command SET SHARE . Do the same for infrastructure nodes, if they exist. See SET SHARE (IBM(R) Documentation). IBM Z network connectivity requirements To install on IBM Z(R) under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: A direct-attached OSA or RoCE network adapter A z/VM VSWITCH in layer 2 Ethernet mode set up Disk storage for the z/VM guest virtual machines FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV and High Performance FICON (zHPF) to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine 2.3.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. Additional resources See Bridging a HiperSockets LAN with a z/VM Virtual Switch in IBM(R) Documentation. 
See Scaling HyperPAV alias devices on Linux guests on z/VM for performance optimization. See Topics in LPAR performance for LPAR weight management and entitlements. Recommended host practices for IBM Z(R) & IBM(R) LinuxONE environments 2.3.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an HTTP or HTTPS server to establish a network connection to download their Ignition config files. The machines are configured with static IP addresses. No DHCP server is required. Ensure that the machines have persistent IP addresses and hostnames. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 2.3.6.1. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 2.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 2.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 2.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . Additional resources Configuring chrony time service 2.3.7. 
User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 2.3.7.1. 
Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 2.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 2.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 
2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 2.3.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 2.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. 
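As a quick manual check of the /readyz health probe behavior described above for the API load balancer, you can query the endpoint directly once an API server instance is reachable. The following one-liner is a sketch rather than part of the official procedure; it assumes the api-int record name and the ocp4.example.com domain from the DNS examples in this document, that curl is available on the machine you run it from, and that unauthenticated access to the API server health endpoints is allowed, which is the default behavior:
USD curl -k https://api-int.ocp4.example.com:6443/readyz
A plain ok response indicates that the instance is ready; any other response, or a connection error, is the condition that a correctly configured load balancer uses to remove the instance from the pool within the timing constraints noted above.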
Configure the following ports on both the front and back of the load balancers: Table 2.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 2.3.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 2.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 
3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 2.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, preparing a web server for the Ignition files, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure Set up static IP addresses. Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See the Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. 
Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 2.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. 
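The per-node lookups described above can also be scripted instead of being run one name at a time. The following loop is a sketch, not part of the official procedure; it reuses the <nameserver_ip> placeholder from the dig examples and assumes the node record names from the sample zone file in this document ( control-plane0 through control-plane2 and compute0 through compute1 in the ocp4.example.com domain), so adjust the list to match your own DNS records:
USD printf '%s\n' control-plane0 control-plane1 control-plane2 compute0 compute1 | xargs -I{} dig +noall +answer @<nameserver_ip> {}.ocp4.example.com
Each name that resolves correctly prints one A record line; a name that prints nothing indicates a missing or misnamed record that you should fix before you continue.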
From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 2.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. 
SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 2.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux (RHEL) 8, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 2.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. 
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 2.9. Manually creating the installation configuration file Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z(R) 2.9.1. 
Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. 
If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 2.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a minimal three node cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. 
Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 2.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 2.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 2.9. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. 
For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 2.10. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. You can change this value by migrating from OpenShift SDN to OVN-Kubernetes. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 2.11. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . 
Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 2.12. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 2.13. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 2.14. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. 
The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 2.15. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 2.16. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 2.17. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 2.18. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. 
The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 2.19. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 2.11. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. 
Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 2.12. Configuring NBDE with static IP in an IBM Z or IBM LinuxONE environment Enabling NBDE disk encryption in an IBM Z(R) or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section. Prerequisites You have set up the External Tang Server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane. Procedure Create Butane configuration files for the control plane and compute nodes. The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption: variant: openshift version: 4.14.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3 1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 For installations on DASD-type disks, replace with device: /dev/disk/by-label/root . 3 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Create a customized initramfs file to boot the machine, by running the following command: USD coreos-installer pxe customize \ /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \ --dest-device /dev/disk/by-id/scsi-<serial-number> --dest-karg-append \ ip=<ip-address>::<gateway-ip>:<subnet-mask>::<network-device>:none \ --dest-karg-append nameserver=<nameserver-ip> \ --dest-karg-append rd.neednet=1 -o \ /root/rhcos-bootfiles/<Node-name>-initramfs.s390x.img Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters. Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot . 
Example kernel parameter file for the control plane machine: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/dasda \ 1 ignition.firstboot ignition.platform.id=metal \ coreos.live.rootfs_url=http://10.19.17.25/redhat/ocp/rhcos-413.86.202302201445-0/rhcos-413.86.202302201445-0-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://bastion.ocp-cluster1.example.com:8080/ignition/master.ign \ ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 \ zfcp.allow_lun_scan=0 \ 2 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \ 3 zfcp.allow_lun_scan=0 \ rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 1 For installations on DASD-type disks, add coreos.inst.install_dev=/dev/dasda . Omit this value for FCP-type disks. 2 For installations on FCP-type disks, add zfcp.allow_lun_scan=0 . Omit this value for DASD-type disks. 3 For installations on DASD-type disks, replace with rd.dasd=0.0.3490 to specify the DASD device. Note Write all options in the parameter file as a single line and make sure you have no newline characters. Additional resources Creating machine configs with Butane 2.13. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on z/VM guest virtual machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS z/VM guest virtual machines have rebooted. Complete the following steps to create the machines. Prerequisites An HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create. Procedure Log in to Linux on your provisioning machine. Obtain the Red Hat Enterprise Linux CoreOS (RHCOS) kernel, initramfs, and rootfs files from the RHCOS image mirror . Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel, initramfs, and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Note The rootfs image is the same for FCP and DASD. Create parameter files. The following parameters are specific for a particular virtual machine: For ip= , specify the following seven entries: The IP address for the machine. An empty string. The gateway. The netmask. The machine host and domain name in the form hostname.domainname . Omit this value to let RHCOS decide. The network interface name. Omit this value to let RHCOS decide. If you use static IP addresses, specify none . For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. 
For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. For installations on DASD-type disks, complete the following tasks: For coreos.inst.install_dev= , specify /dev/dasda . Use rd.dasd= to specify the DASD where RHCOS is to be installed. Leave all other parameters unchanged. Example parameter file, bootstrap-0.parm , for the bootstrap machine: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/dasda \ coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign \ ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.dasd=0.0.3490 Write all options in the parameter file as a single line and make sure you have no newline characters. For installations on FCP-type disks, complete the following tasks: Use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. For multipathing repeat this step for each additional path. Note When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems. Set the install device as: coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> . Note If additional LUNs are configured with NPIV, FCP requires zfcp.allow_lun_scan=0 . If you must enable zfcp.allow_lun_scan=1 because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. Leave all other parameters unchanged. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks . The following is an example parameter file worker-1.parm for a worker node with multipathing: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> \ coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \ ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 Write all options in the parameter file as a single line and make sure you have no newline characters. Transfer the initramfs, kernel, parameter files, and RHCOS images to z/VM, for example with FTP. For details about how to transfer the files with FTP and boot from the virtual reader, see Installing under Z/VM . Punch the files to the virtual reader of the z/VM guest virtual machine that is to become your bootstrap node. See PUNCH in IBM Documentation. Tip You can use the CP PUNCH command or, if you use Linux, the vmur command to transfer files between two z/VM guest virtual machines. Log in to CMS on the bootstrap machine. IPL the bootstrap machine from the reader: See IPL in IBM Documentation. Repeat this procedure for the other machines in the cluster. 2.13.1. 
Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 2.13.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. 
If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Always set the fail_over_mac=1 option in active-backup mode, to avoid problems when shared OSA/RoCE cards are used. 
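As a consolidated illustration of the bonding options above, the following sketch combines the individual arguments into one set of kernel arguments for a node that uses a bonded interface with a static IP address, keeping the documented ordering of ip= , nameserver= , and then bond= , and including rd.neednet=1 to bring the network up in the initramfs. The interface names, addresses, and DNS server are placeholder values taken from the earlier examples in this section, not values to use verbatim in your environment:

rd.neednet=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none nameserver=4.4.4.41 bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1

Because the bond carries the static address, the ip= entry names the bonded device ( bond0 ) rather than a physical interface, and fail_over_mac=1 is set as recommended above for shared OSA/RoCE cards.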
Configuring VLANs on bonded interfaces Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter. Use the following example to configure a VLAN on the bonded interface and to use DHCP: ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Use the following example to configure the bonded interface with a VLAN and to use a static IP address: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Using network teaming Optional: You can use network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 2.14. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 2.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI.
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.16. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. 
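As noted earlier in this procedure, clusters on user-provisioned infrastructure must implement their own method of automatically approving the kubelet serving certificate requests (CSRs). The following bash sketch is one minimal, illustrative way to watch for and approve pending CSRs; it assumes the oc client is already logged in with cluster-admin privileges, filters on the node-bootstrapper service account and system:node requestors mentioned in the note, and deliberately omits the node identity verification that a production approver must add:

#!/bin/bash
# Minimal CSR auto-approval sketch for user-provisioned infrastructure.
# Assumption: oc is logged in as a cluster-admin user.
# Warning: this loop approves every pending CSR whose requestor matches the
# node-bootstrapper service account or a system:node identity. It does NOT
# confirm the identity of the node, which you must add before relying on an
# approach like this in production.
while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}} {{.spec.username}}{{"\n"}}{{end}}{{end}}' |
    awk '/node-bootstrapper|system:node:/ {print $1}' |
    xargs --no-run-if-empty oc adm certificate approve
  sleep 60
done

A more robust approver would also check each CSR against an inventory of expected node names or addresses before approving; alternatively, you can continue to approve the server CSRs manually as shown in the rest of this procedure.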
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 2.17. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m Configure the Operators that are not available. 2.17.1. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. 
Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.17.1.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.14 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed . 2.17.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 2.18. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. 2.19. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service . How to generate SOSREPORT within OpenShift4 nodes without SSH . 2.20. Next steps Enabling multipathing with kernel arguments on RHCOS . Customize your cluster . If necessary, you can opt out of remote health reporting .
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "variant: openshift version: 4.14.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3", "coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial-number> --dest-karg-append ip=<ip-address>::<gateway-ip>:<subnet-mask>::<network-device>:none --dest-karg-append nameserver=<nameserver-ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<Node-name>-initramfs.s390x.img", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda \\ 1 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://10.19.17.25/redhat/ocp/rhcos-413.86.202302201445-0/rhcos-413.86.202302201445-0-live-rootfs.s390x.img coreos.inst.ignition_url=http://bastion.ocp-cluster1.example.com:8080/ignition/master.ign ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 zfcp.allow_lun_scan=0 \\ 2 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 3 zfcp.allow_lun_scan=0 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.dasd=0.0.3490", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000", "ipl c", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", 
"bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit 
configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.14 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_ibm_z_and_ibm_linuxone/installing-ibm-z
Chapter 13. Configuring the overcloud with Ansible
Chapter 13. Configuring the overcloud with Ansible Ansible is the main method to apply the overcloud configuration. This chapter provides information about how to interact with the overcloud Ansible configuration. Although director generates the Ansible playbooks automatically, it is a good idea to familiarize yourself with Ansible syntax. For more information about using Ansible, see https://docs.ansible.com/ . Note Ansible also uses the concept of roles, which are different from OpenStack Platform director roles. Ansible roles form reusable components of playbooks, whereas director roles contain mappings of OpenStack services to node types. 13.1. Ansible-based overcloud configuration (config-download) The config-download feature is the method that director uses to configure the overcloud. Director uses config-download in conjunction with OpenStack Orchestration (heat) and OpenStack Workflow Service (mistral) to generate the software configuration and apply it to each overcloud node. Although heat creates all deployment data from SoftwareDeployment resources to perform the overcloud installation and configuration, heat does not apply any of the configuration. Heat only provides the configuration data through the heat API. When director creates the stack, a mistral workflow queries the heat API to obtain the configuration data, generates a set of Ansible playbooks, and applies those playbooks to the overcloud. As a result, when you run the openstack overcloud deploy command, the following process occurs: Director creates a new deployment plan based on openstack-tripleo-heat-templates and includes any environment files and parameters to customize the plan. Director uses heat to interpret the deployment plan and create the overcloud stack and all descendant resources. This includes provisioning nodes with the OpenStack Bare Metal service (ironic). Heat also creates the software configuration from the deployment plan. Director compiles the Ansible playbooks from this software configuration. Director generates a temporary user ( tripleo-admin ) on the overcloud nodes specifically for Ansible SSH access. Director downloads the heat software configuration and generates a set of Ansible playbooks from the heat outputs. Director applies the Ansible playbooks to the overcloud nodes using ansible-playbook . 13.2. config-download working directory Director generates a set of Ansible playbooks for the config-download process. These playbooks are stored in a working directory within /var/lib/mistral/ . This directory is named after the overcloud, which is overcloud by default. The working directory contains a set of sub-directories named after each overcloud role. Each role sub-directory contains all tasks relevant to the configuration of the nodes in that role, as well as additional sub-directories named after each specific node. The node sub-directories contain node-specific variables to apply to the overcloud role tasks. As a result, the overcloud roles within the working directory use the following structure: Each working directory is a local Git repository that records changes after each deployment operation. Use the local Git repositories to track configuration changes between each deployment. 13.3. Enabling access to config-download working directories The mistral user in the OpenStack Workflow service (mistral) containers owns all files in the /var/lib/mistral/ working directories.
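As a quick illustration (a minimal check that is not part of the official procedure, and that assumes the default overcloud name of overcloud ), you can confirm this ownership from the undercloud before you change any permissions:
sudo ls -ld /var/lib/mistral/overcloud
sudo stat -c '%U %G %n' /var/lib/mistral/overcloud/ansible.log
Both commands should report the mistral user and group until you grant additional access as described below.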
You can grant the stack user on the undercloud access to all files in this directory. This helps with performing certain operations within the directory. Procedure Use the setfacl command to grant the stack user on the undercloud access to the files in the /var/lib/mistral directory: This command retains mistral user access to the directory. 13.4. Checking the config-download log During the config-download process, Ansible creates a log file on the undercloud in the config-download working directory. Procedure View the log with the less command within the config-download working directory. The following example uses the overcloud working directory: 13.5. Separating the provisioning and configuration processes The openstack overcloud deploy command runs the heat-based provisioning process and then the config-download configuration process. You can also run the command to execute each process individually. Procedure Source the stackrc file: Run the deployment command with the --stack-only option. Include any environment files required for your overcloud: Wait until the provisioning process completes. Enable SSH access from the undercloud to the overcloud for the tripleo-admin user. The config-download process uses the tripleo-admin user to perform the Ansible-based configuration: Run the deployment command with the --config-download-only option. Include any environment files required for your overcloud: Wait until the configuration process completes. 13.6. Running config-download manually The working directory in /var/lib/mistral/overcloud contains the playbooks and scripts necessary to interact with ansible-playbook directly. This procedure shows how to interact with these files. Procedure Change to the directory of the Ansible playbook: Run the ansible-playbook-command.sh command to reproduce the deployment: You can pass additional Ansible arguments to this script, which are then passed unchanged to the ansible-playbook command. This means that you can use other Ansible features, such as check mode ( --check ), limiting hosts ( --limit ), or overriding variables ( -e ). For example: The working directory contains a playbook called deploy_steps_playbook.yaml , which runs the overcloud configuration. To view this playbook, run the following command: The playbook uses various task files contained in the working directory. Some task files are common to all OpenStack Platform roles and some are specific to certain OpenStack Platform roles and servers. The working directory also contains sub-directories that correspond to each role that you define in your overcloud roles_data file. For example: Each OpenStack Platform role directory also contains sub-directories for individual servers of that role type. The directories use the composable role hostname format: The Ansible tasks are tagged. To see the full list of tags, use the CLI argument --list-tags for ansible-playbook : Then apply tagged configuration using the --tags , --skip-tags , or --start-at-task options with the ansible-playbook-command.sh script: When config-download configures Ceph, Ansible executes ceph-ansible from within the config-download external_deploy_steps_tasks playbook. When you run config-download manually, the second Ansible execution does not inherit the ssh_args argument. To pass Ansible environment variables to this execution, use a heat environment file. For example: Warning When you use ansible-playbook CLI arguments such as --tags , --skip-tags , or --start-at-task , do not run or apply deployment configuration out of order.
These CLI arguments are a convenient way to rerun previously failed tasks or to iterate over an initial deployment. However, to guarantee a consistent deployment, you must run all tasks from deploy_steps_playbook.yaml in order. 13.7. Performing Git operations on the working directory The config-download working directory is a local Git repository. Every time a deployment operation runs, director adds a Git commit to the working directory with the relevant changes. You can perform Git operations to view configuration for the deployment at different stages and compare the configuration with different deployments. Be aware of the limitations of the working directory. For example, if you use Git to revert to an earlier version of the config-download working directory, this action affects only the configuration in the working directory. It does not affect the following configurations: The overcloud data schema: Applying an earlier version of the working directory software configuration does not undo data migration and schema changes. The hardware layout of the overcloud: Reverting to an earlier software configuration does not undo changes related to overcloud hardware, such as scaling up or down. The heat stack: Reverting to earlier revisions of the working directory has no effect on the configuration stored in the heat stack. The heat stack creates a new version of the software configuration that applies to the overcloud. To make permanent changes to the overcloud, modify the environment files applied to the overcloud stack before you rerun the openstack overcloud deploy command. Complete the following steps to compare different commits of the config-download working directory. Procedure Change to the config-download working directory for your overcloud. In this example, the working directory is for the overcloud named overcloud : Run the git log command to list the commits in your working directory. You can also format the log output to show the date: By default, the most recent commit appears first. Run the git diff command against two commit hashes to see all changes between the deployments: 13.8. Creating config-download files manually You can generate your own config-download files outside of the standard workflow. For example, you can generate the overcloud heat stack using the --stack-only option with the openstack overcloud deploy command so that you can apply the configuration separately. Complete the following steps to create your own config-download files manually. Procedure Generate the config-download files: --name is the name of the overcloud that you want to use for the Ansible file export. --config-dir is the location where you want to save the config-download files. Change to the directory that contains your config-download files: Generate a static inventory file: Use the config-download files and the static inventory file to perform the configuration. To execute the deployment playbook, run the ansible-playbook command: To generate an overcloudrc file manually from this configuration, run the following command: 13.9. config-download top-level files The following files are important top-level files within a config-download working directory. Ansible configuration and execution The following files are specific to configuring and executing Ansible within the config-download working directory. ansible.cfg Configuration file used when running ansible-playbook . ansible.log Log file from the last run of ansible-playbook . ansible-errors.json JSON structured file that contains any deployment errors.
ansible-playbook-command.sh Executable script to rerun the ansible-playbook command from the last deployment operation. ssh_private_key Private SSH key that Ansible uses to access the overcloud nodes. tripleo-ansible-inventory.yaml Ansible inventory file that contains hosts and variables for all the overcloud nodes. overcloud-config.tar.gz Archive of the working directory. Playbooks The following files are playbooks within the config-download working directory. deploy_steps_playbook.yaml Main deployment steps. This playbook performs the main configuration operations for your overcloud. pre_upgrade_rolling_steps_playbook.yaml Pre-upgrade steps for a major upgrade. upgrade_steps_playbook.yaml Major upgrade steps. post_upgrade_steps_playbook.yaml Post-upgrade steps for a major upgrade. update_steps_playbook.yaml Minor update steps. fast_forward_upgrade_playbook.yaml Fast forward upgrade tasks. Use this playbook only when you want to upgrade from one long-life version of Red Hat OpenStack Platform to the next. 13.10. config-download tags The playbooks use tagged tasks to control the tasks that they apply to the overcloud. Use tags with the ansible-playbook CLI arguments --tags or --skip-tags to control which tasks to execute. The following list contains information about the tags that are enabled by default: facts Fact gathering operations. common_roles Ansible roles common to all nodes. overcloud All plays for overcloud deployment. pre_deploy_steps Deployments that happen before the deploy_steps operations. host_prep_steps Host preparation steps. deploy_steps Deployment steps. post_deploy_steps Steps that happen after the deploy_steps operations. external All external deployment tasks. external_deploy_steps External deployment tasks that run on the undercloud only. 13.11. config-download deployment steps The deploy_steps_playbook.yaml playbook configures the overcloud. This playbook applies all software configuration that is necessary to deploy a full overcloud based on the overcloud deployment plan. This section contains a summary of the different Ansible plays used within this playbook. The play names in this section are the same names that are used within the playbook and that are displayed in the ansible-playbook output. This section also contains information about the Ansible tags that are set on each play. Gather facts from undercloud Fact gathering for the undercloud node. Tags: facts Gather facts from overcloud Fact gathering for the overcloud nodes. Tags: facts Load global variables Loads all variables from global_vars.yaml . Tags: always Common roles for TripleO servers Applies common Ansible roles to all overcloud nodes, including tripleo-bootstrap for installing bootstrap packages, and tripleo-ssh-known-hosts for configuring ssh known hosts. Tags: common_roles Overcloud deploy step tasks for step 0 Applies tasks from the deploy_steps_tasks template interface. Tags: overcloud , deploy_steps Server deployments Applies server-specific heat deployments for configuration such as networking and hieradata. Includes NetworkDeployment, <Role>Deployment, <Role>AllNodesDeployment, etc. Tags: overcloud , pre_deploy_steps Host prep steps Applies tasks from the host_prep_steps template interface. Tags: overcloud , host_prep_steps External deployment step [1,2,3,4,5] Applies tasks from the external_deploy_steps_tasks template interface. Ansible runs these tasks only against the undercloud node.
Tags: external , external_deploy_steps Overcloud deploy step tasks for [1,2,3,4,5] Applies tasks from the deploy_steps_tasks template interface. Tags: overcloud , deploy_steps Overcloud common deploy step tasks [1,2,3,4,5] Applies the common tasks performed at each step, including puppet host configuration, container-puppet.py , and paunch (container configuration). Tags: overcloud , deploy_steps Server Post Deployments Applies server-specific heat deployments for configuration performed after the 5-step deployment process. Tags: overcloud , post_deploy_steps External deployment Post Deploy tasks Applies tasks from the external_post_deploy_steps_tasks template interface. Ansible runs these tasks only against the undercloud node. Tags: external , external_deploy_steps 13.12. Next steps You can now continue your regular overcloud operations.
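For reference, here is a short sketch that ties together the working directory from section 13.6 and the tags from section 13.10 (an illustration only; it assumes the default working directory of /var/lib/mistral/overcloud and a role named Compute in your roles_data file). It reruns only the fact gathering and host preparation tasks for the Compute role:
cd /var/lib/mistral/overcloud
./ansible-playbook-command.sh --tags facts,host_prep_steps --limit Compute
Because this is a partial, tagged run, use it only to iterate on a failed step, and follow it with a full run of deploy_steps_playbook.yaml to keep the deployment consistent.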
[ "─ /var/lib/mistral/overcloud | ├── Controller │ ├── overcloud-controller-0 | ├── overcloud-controller-1 │ └── overcloud-controller-2 ├── Compute │ ├── overcloud-compute-0 | ├── overcloud-compute-1 │ └── overcloud-compute-2", "sudo setfacl -R -m u:stack:rwx /var/lib/mistral", "less /var/lib/mistral/overcloud/ansible.log", "source ~/stackrc", "openstack overcloud deploy --templates -e environment-file1.yaml -e environment-file2.yaml --stack-only", "openstack overcloud admin authorize", "openstack overcloud deploy --templates -e environment-file1.yaml -e environment-file2.yaml --config-download-only", "cd /var/lib/mistral/overcloud/", "./ansible-playbook-command.sh", "./ansible-playbook-command.sh --limit Controller", "less deploy_steps_playbook.yaml", "ls Controller/", "ls Controller/overcloud-controller-0", "ansible-playbook -i tripleo-ansible-inventory.yaml --list-tags deploy_steps_playbook.yaml", "./ansible-playbook-command.sh --tags overcloud", "parameter_defaults: CephAnsibleEnvironmentVariables: ANSIBLE_HOST_KEY_CHECKING: 'False' ANSIBLE_PRIVATE_KEY_FILE: '/home/stack/.ssh/id_rsa'", "cd /var/lib/mistral/overcloud", "git log --format=format:\"%h%x09%cd%x09\" a7e9063 Mon Oct 8 21:17:52 2018 +1000 dfb9d12 Fri Oct 5 20:23:44 2018 +1000 d0a910b Wed Oct 3 19:30:16 2018 +1000", "git diff a7e9063 dfb9d12", "openstack overcloud config download --name overcloud --config-dir ~/config-download", "cd ~/config-download", "tripleo-ansible-inventory --ansible_ssh_user heat-admin --static-yaml-inventory inventory.yaml", "ansible-playbook -i inventory.yaml --private-key ~/.ssh/id_rsa --become ~/config-download/deploy_steps_playbook.yaml", "openstack action execution run --save-result --run-sync tripleo.deployment.overcloudrc '{\"container\":\"overcloud\"}' | jq -r '.[\"result\"][\"overcloudrc.v3\"]' > overcloudrc.v3" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/director_installation_and_usage/configuring-the-overcloud-with-ansible
17.2.2.4. Expansions
17.2.2.4. Expansions Expansions, when used in conjunction with the spawn and twist directives, provide information about the client, server, and processes involved. Below is a list of supported expansions: %a - Supplies the client's IP address. %A - Supplies the server's IP address. %c - Supplies a variety of client information, such as the username and hostname, or the username and IP address. %d - Supplies the daemon process name. %h - Supplies the client's hostname (or IP address, if the hostname is unavailable). %H - Supplies the server's hostname (or IP address, if the hostname is unavailable). %n - Supplies the client's hostname. If unavailable, unknown is printed. If the client's hostname and host address do not match, paranoid is printed. %N - Supplies the server's hostname. If unavailable, unknown is printed. If the server's hostname and host address do not match, paranoid is printed. %p - Supplies the daemon process ID. %s - Supplies various types of server information, such as the daemon process and the host or IP address of the server. %u - Supplies the client's username. If unavailable, unknown is printed. The following sample rule uses an expansion in conjunction with the spawn command to identify the client host in a customized log file. When connections to the SSH daemon ( sshd ) are attempted from a host in the example.com domain, execute the echo command to log the attempt, including the client hostname (by using the %h expansion), to a special file: Similarly, expansions can be used to personalize messages back to the client. In the following example, clients attempting to access FTP services from the example.com domain are informed that they have been banned from the server: For a full explanation of available expansions, as well as additional access control options, refer to section 5 of the man pages for hosts_access ( man 5 hosts_access ) and the man page for hosts_options . For additional information about TCP wrappers, refer to Section 17.5, "Additional Resources" . For more information about how to secure TCP wrappers, refer to the chapter titled Server Security in the Security Guide .
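As one more hypothetical illustration (a sketch that is not taken from the guides referenced above), the following hosts.allow rule combines the %a and %d expansions to record both the client IP address and the daemon process name whenever a Telnet connection from the 192.168.0. network is refused:
in.telnetd : 192.168.0. : spawn /bin/echo `/bin/date` %d refused connection from %a >> /var/log/telnet-deny.log : deny
As with the sshd example, the spawn directive runs the shell command while the deny option rejects the connection.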
[ "sshd : .example.com : spawn /bin/echo `/bin/date` access denied to %h>>/var/log/sshd.log : deny", "vsftpd : .example.com : twist /bin/echo \"421 %h has been banned from this server!\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s3-tcpwrappers-access-rules-options-exp
Chapter 5. Probe [monitoring.coreos.com/v1]
Chapter 5. Probe [monitoring.coreos.com/v1] Description Probe defines monitoring for a set of static targets or ingresses. Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of desired Ingress selection for target discovery by Prometheus. 5.1.1. .spec Description Specification of desired Ingress selection for target discovery by Prometheus. Type object Property Type Description authorization object Authorization section for this endpoint basicAuth object BasicAuth allow an endpoint to authenticate over basic authentication. More info: https://prometheus.io/docs/operating/configuration/#endpoint bearerTokenSecret object Secret to mount to read bearer token for scraping targets. The secret needs to be in the same namespace as the probe and accessible by the Prometheus Operator. interval string Interval at which targets are probed using the configured prober. If not specified Prometheus' global scrape interval is used. jobName string The job name assigned to scraped metrics by default. labelLimit integer Per-scrape limit on number of labels that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. labelNameLengthLimit integer Per-scrape limit on length of labels name that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. labelValueLengthLimit integer Per-scrape limit on length of labels value that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. metricRelabelings array MetricRelabelConfigs to apply to samples before ingestion. metricRelabelings[] object RelabelConfig allows dynamic rewriting of the label set, being applied to samples before ingestion. It defines <metric_relabel_configs> -section of Prometheus configuration. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs module string The module to use for probing specifying how to probe the target. Example module configuring in the blackbox exporter: https://github.com/prometheus/blackbox_exporter/blob/master/example.yml oauth2 object OAuth2 for the URL. Only valid in Prometheus versions 2.27.0 and newer. prober object Specification for the prober to use for probing targets. The prober.URL parameter is required. Targets cannot be probed if left empty. sampleLimit integer SampleLimit defines per-scrape limit on number of scraped samples that will be accepted. scrapeTimeout string Timeout for scraping metrics from the Prometheus exporter. If not specified, the Prometheus global scrape interval is used. targetLimit integer TargetLimit defines a limit on the number of scraped targets that will be accepted. targets object Targets defines a set of static or dynamically discovered targets to probe. 
tlsConfig object TLS configuration to use when scraping the endpoint. 5.1.2. .spec.authorization Description Authorization section for this endpoint Type object Property Type Description credentials object The secret's key that contains the credentials of the request type string Set the authentication type. Defaults to Bearer, Basic will cause an error 5.1.3. .spec.authorization.credentials Description The secret's key that contains the credentials of the request Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 5.1.4. .spec.basicAuth Description BasicAuth allow an endpoint to authenticate over basic authentication. More info: https://prometheus.io/docs/operating/configuration/#endpoint Type object Property Type Description password object The secret in the service monitor namespace that contains the password for authentication. username object The secret in the service monitor namespace that contains the username for authentication. 5.1.5. .spec.basicAuth.password Description The secret in the service monitor namespace that contains the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 5.1.6. .spec.basicAuth.username Description The secret in the service monitor namespace that contains the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 5.1.7. .spec.bearerTokenSecret Description Secret to mount to read bearer token for scraping targets. The secret needs to be in the same namespace as the probe and accessible by the Prometheus Operator. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 5.1.8. .spec.metricRelabelings Description MetricRelabelConfigs to apply to samples before ingestion. Type array 5.1.9. .spec.metricRelabelings[] Description RelabelConfig allows dynamic rewriting of the label set, being applied to samples before ingestion. It defines <metric_relabel_configs> -section of Prometheus configuration. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs Type object Property Type Description action string Action to perform based on regex matching. Default is 'replace'. uppercase and lowercase actions require Prometheus >= 2.36. 
modulus integer Modulus to take of the hash of the source label values. regex string Regular expression against which the extracted value is matched. Default is '(.*)' replacement string Replacement value against which a regex replace is performed if the regular expression matches. Regex capture groups are available. Default is 'USD1' separator string Separator placed between concatenated source label values. default is ';'. sourceLabels array (string) The source labels select values from existing labels. Their content is concatenated using the configured separator and matched against the configured regular expression for the replace, keep, and drop actions. targetLabel string Label to which the resulting value is written in a replace action. It is mandatory for replace actions. Regex capture groups are available. 5.1.10. .spec.oauth2 Description OAuth2 for the URL. Only valid in Prometheus versions 2.27.0 and newer. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object The secret or configmap containing the OAuth2 client id clientSecret object The secret containing the OAuth2 client secret endpointParams object (string) Parameters to append to the token URL scopes array (string) OAuth2 scopes used for the token request tokenUrl string The URL to fetch the token from 5.1.11. .spec.oauth2.clientId Description The secret or configmap containing the OAuth2 client id Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 5.1.12. .spec.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 5.1.13. .spec.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 5.1.14. .spec.oauth2.clientSecret Description The secret containing the OAuth2 client secret Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 5.1.15. .spec.prober Description Specification for the prober to use for probing targets. The prober.URL parameter is required. Targets cannot be probed if left empty. Type object Required url Property Type Description path string Path to collect metrics from. Defaults to /probe . proxyUrl string Optional ProxyURL. scheme string HTTP scheme to use for scraping. Defaults to http . url string Mandatory URL of the prober. 5.1.16. .spec.targets Description Targets defines a set of static or dynamically discovered targets to probe. 
Type object Property Type Description ingress object ingress defines the Ingress objects to probe and the relabeling configuration. If staticConfig is also defined, staticConfig takes precedence. staticConfig object staticConfig defines the static list of targets to probe and the relabeling configuration. If ingress is also defined, staticConfig takes precedence. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#static_config . 5.1.17. .spec.targets.ingress Description ingress defines the Ingress objects to probe and the relabeling configuration. If staticConfig is also defined, staticConfig takes precedence. Type object Property Type Description namespaceSelector object From which namespaces to select Ingress objects. relabelingConfigs array RelabelConfigs to apply to the label set of the target before it gets scraped. The original ingress address is available via the \tmp_prometheus_ingress_address label. It can be used to customize the probed URL. The original scrape job's name is available via the \__tmp_prometheus_job_name label. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config relabelingConfigs[] object RelabelConfig allows dynamic rewriting of the label set, being applied to samples before ingestion. It defines <metric_relabel_configs> -section of Prometheus configuration. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs selector object Selector to select the Ingress objects. 5.1.18. .spec.targets.ingress.namespaceSelector Description From which namespaces to select Ingress objects. Type object Property Type Description any boolean Boolean describing whether all namespaces are selected in contrast to a list restricting them. matchNames array (string) List of namespace names to select from. 5.1.19. .spec.targets.ingress.relabelingConfigs Description RelabelConfigs to apply to the label set of the target before it gets scraped. The original ingress address is available via the __tmp_prometheus_ingress_address label. It can be used to customize the probed URL. The original scrape job's name is available via the \__tmp_prometheus_job_name label. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type array 5.1.20. .spec.targets.ingress.relabelingConfigs[] Description RelabelConfig allows dynamic rewriting of the label set, being applied to samples before ingestion. It defines <metric_relabel_configs> -section of Prometheus configuration. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs Type object Property Type Description action string Action to perform based on regex matching. Default is 'replace'. uppercase and lowercase actions require Prometheus >= 2.36. modulus integer Modulus to take of the hash of the source label values. regex string Regular expression against which the extracted value is matched. Default is '(.*)' replacement string Replacement value against which a regex replace is performed if the regular expression matches. Regex capture groups are available. Default is 'USD1' separator string Separator placed between concatenated source label values. default is ';'. sourceLabels array (string) The source labels select values from existing labels. Their content is concatenated using the configured separator and matched against the configured regular expression for the replace, keep, and drop actions. 
targetLabel string Label to which the resulting value is written in a replace action. It is mandatory for replace actions. Regex capture groups are available. 5.1.21. .spec.targets.ingress.selector Description Selector to select the Ingress objects. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 5.1.22. .spec.targets.ingress.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 5.1.23. .spec.targets.ingress.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 5.1.24. .spec.targets.staticConfig Description staticConfig defines the static list of targets to probe and the relabeling configuration. If ingress is also defined, staticConfig takes precedence. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#static_config . Type object Property Type Description labels object (string) Labels assigned to all metrics scraped from the targets. relabelingConfigs array RelabelConfigs to apply to the label set of the targets before it gets scraped. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config relabelingConfigs[] object RelabelConfig allows dynamic rewriting of the label set, being applied to samples before ingestion. It defines <metric_relabel_configs> -section of Prometheus configuration. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs static array (string) The list of hosts to probe. 5.1.25. .spec.targets.staticConfig.relabelingConfigs Description RelabelConfigs to apply to the label set of the targets before it gets scraped. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type array 5.1.26. .spec.targets.staticConfig.relabelingConfigs[] Description RelabelConfig allows dynamic rewriting of the label set, being applied to samples before ingestion. It defines <metric_relabel_configs> -section of Prometheus configuration. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs Type object Property Type Description action string Action to perform based on regex matching. Default is 'replace'. uppercase and lowercase actions require Prometheus >= 2.36. modulus integer Modulus to take of the hash of the source label values. 
regex string Regular expression against which the extracted value is matched. Default is '(.*)' replacement string Replacement value against which a regex replace is performed if the regular expression matches. Regex capture groups are available. Default is 'USD1' separator string Separator placed between concatenated source label values. default is ';'. sourceLabels array (string) The source labels select values from existing labels. Their content is concatenated using the configured separator and matched against the configured regular expression for the replace, keep, and drop actions. targetLabel string Label to which the resulting value is written in a replace action. It is mandatory for replace actions. Regex capture groups are available. 5.1.27. .spec.tlsConfig Description TLS configuration to use when scraping the endpoint. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 5.1.28. .spec.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 5.1.29. .spec.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 5.1.30. .spec.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 5.1.31. .spec.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 5.1.32. .spec.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 5.1.33. .spec.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 
optional boolean Specify whether the Secret or its key must be defined 5.1.34. .spec.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 5.2. API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1/probes GET : list objects of kind Probe /apis/monitoring.coreos.com/v1/namespaces/{namespace}/probes DELETE : delete collection of Probe GET : list objects of kind Probe POST : create a Probe /apis/monitoring.coreos.com/v1/namespaces/{namespace}/probes/{name} DELETE : delete a Probe GET : read the specified Probe PATCH : partially update the specified Probe PUT : replace the specified Probe 5.2.1. /apis/monitoring.coreos.com/v1/probes Table 5.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind Probe Table 5.2. HTTP responses HTTP code Reponse body 200 - OK ProbeList schema 401 - Unauthorized Empty 5.2.2. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/probes Table 5.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 5.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Probe Table 5.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Probe Table 5.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. 
If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.8. HTTP responses HTTP code Response body 200 - OK ProbeList schema 401 - Unauthorized Empty HTTP method POST Description create a Probe Table 5.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.10. Body parameters Parameter Type Description body Probe schema Table 5.11. HTTP responses HTTP code Response body 200 - OK Probe schema 201 - Created Probe schema 202 - Accepted Probe schema 401 - Unauthorized Empty 5.2.3. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/probes/{name} Table 5.12. Global path parameters Parameter Type Description name string name of the Probe namespace string object name and auth scope, such as for teams and projects Table 5.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Probe Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. 
If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.15. Body parameters Parameter Type Description body DeleteOptions schema Table 5.16. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Probe Table 5.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 5.18. HTTP responses HTTP code Response body 200 - OK Probe schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Probe Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body Patch schema Table 5.21. HTTP responses HTTP code Response body 200 - OK Probe schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Probe Table 5.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.23. Body parameters Parameter Type Description body Probe schema Table 5.24. HTTP responses HTTP code Response body 200 - OK Probe schema 201 - Created Probe schema 401 - Unauthorized Empty
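As an illustrative sketch only (the namespace, manifest file name, limit value, and continue token below are hypothetical placeholders rather than values taken from this reference), the list and create endpoints described above can be exercised with the oc client from an authenticated cluster session. To request the first page of Probe objects and let the server return a continue token in the list metadata:
$ oc get --raw "/apis/monitoring.coreos.com/v1/namespaces/my-namespace/probes?limit=50"
To fetch the following page, pass the token from .metadata.continue back with an otherwise identical query:
$ oc get --raw "/apis/monitoring.coreos.com/v1/namespaces/my-namespace/probes?limit=50&continue=<continue-token>"
To exercise the POST endpoint without persisting anything, a server-side dry run can be used with a Probe manifest:
$ oc create -f my-probe.yaml --dry-run=server
These commands assume sufficient permissions in the target namespace; the server-side dry run corresponds to the dryRun query parameter described in Table 5.9.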
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/monitoring_apis/probe-monitoring-coreos-com-v1
Chapter 49. network
Chapter 49. network This chapter describes the commands under the network command. 49.1. network agent add network Add network to an agent Usage: Table 49.1. Positional arguments Value Summary <agent-id> Agent to which a network is added (id only) <network> Network to be added to an agent (name or id) Table 49.2. Command arguments Value Summary -h, --help Show this help message and exit --dhcp Add network to a dhcp agent 49.2. network agent add router Add router to an agent Usage: Table 49.3. Positional arguments Value Summary <agent-id> Agent to which a router is added (id only) <router> Router to be added to an agent (name or id) Table 49.4. Command arguments Value Summary -h, --help Show this help message and exit --l3 Add router to an l3 agent 49.3. network agent delete Delete network agent(s) Usage: Table 49.5. Positional arguments Value Summary <network-agent> Network agent(s) to delete (id only) Table 49.6. Command arguments Value Summary -h, --help Show this help message and exit 49.4. network agent list List network agents Usage: Table 49.7. Command arguments Value Summary -h, --help Show this help message and exit --agent-type <agent-type> List only agents with the specified agent type. the supported agent types are: bgp, dhcp, open-vswitch, linux-bridge, ofa, l3, loadbalancer, metering, metadata, macvtap, nic, baremetal. --host <host> List only agents running on the specified host --network <network> List agents hosting a network (name or id) --router <router> List agents hosting this router (name or id) --long List additional fields in output Table 49.8. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.9. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.10. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.11. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.5. network agent remove network Remove network from an agent. Usage: Table 49.12. Positional arguments Value Summary <agent-id> Agent to which a network is removed (id only) <network> Network to be removed from an agent (name or id) Table 49.13. Command arguments Value Summary -h, --help Show this help message and exit --dhcp Remove network from dhcp agent 49.6. network agent remove router Remove router from an agent Usage: Table 49.14. Positional arguments Value Summary <agent-id> Agent from which router will be removed (id only) <router> Router to be removed from an agent (name or id) Table 49.15. Command arguments Value Summary -h, --help Show this help message and exit --l3 Remove router from an l3 agent 49.7. 
network agent set Set network agent properties Usage: Table 49.16. Positional arguments Value Summary <network-agent> Network agent to modify (id only) Table 49.17. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Set network agent description --enable Enable network agent --disable Disable network agent 49.8. network agent show Display network agent details Usage: Table 49.18. Positional arguments Value Summary <network-agent> Network agent to display (id only) Table 49.19. Command arguments Value Summary -h, --help Show this help message and exit Table 49.20. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.21. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.22. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.23. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.9. network auto allocated topology create Create the auto allocated topology for project Usage: Table 49.24. Command arguments Value Summary -h, --help Show this help message and exit --project <project> Return the auto allocated topology for a given project. Default is current project --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --check-resources Validate the requirements for auto allocated topology. Does not return a topology. --or-show If topology exists returns the topology's information (Default) Table 49.25. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.26. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.27. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.28. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.10. network auto allocated topology delete Delete auto allocated topology for project Usage: Table 49.29. Command arguments Value Summary -h, --help Show this help message and exit --project <project> Delete auto allocated topology for a given project. Default is the current project --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. 49.11. network create Create new network Usage: Table 49.30. 
Positional arguments Value Summary <name> New network name Table 49.31. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --share Share the network between projects --no-share Do not share the network between projects --enable Enable network (default) --disable Disable network --project <project> Owner's project (name or id) --description <description> Set network description --mtu <mtu> Set network mtu --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --availability-zone-hint <availability-zone> Availability zone in which to create this network (Network Availability Zone extension required, repeat option to set multiple availability zones) --enable-port-security Enable port security by default for ports created on this network (default) --disable-port-security Disable port security by default for ports created on this network --external The network has an external routing facility that's not managed by Neutron and can be used as in: openstack router set --external-gateway NETWORK (external-net extension required) --internal Opposite of --external (default) --default Specify if this network should be used as the default external network --no-default Do not use the network as the default external network (default) --qos-policy <qos-policy> Qos policy to attach to this network (name or id) --transparent-vlan Make the network vlan transparent --no-transparent-vlan Do not make the network vlan transparent --provider-network-type <provider-network-type> The physical mechanism by which the virtual network is implemented. For example: flat, geneve, gre, local, vlan, vxlan. --provider-physical-network <provider-physical-network> Name of the physical network over which the virtual network is implemented --provider-segment <provider-segment> Vlan id for vlan networks or tunnel id for GENEVE/GRE/VXLAN networks --dns-domain <dns-domain> Set dns domain for this network (requires dns integration extension) --tag <tag> Tag to be added to the network (repeat option to set multiple tags) --no-tag No tags associated with the network Table 49.32. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.33. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.34. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.35. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.12. network delete Delete network(s) Usage: Table 49.36. 
Positional arguments Value Summary <network> Network(s) to delete (name or id) Table 49.37. Command arguments Value Summary -h, --help Show this help message and exit 49.13. network flavor add profile Add a service profile to a network flavor Usage: Table 49.38. Positional arguments Value Summary <flavor> Network flavor (name or id) <service-profile> Service profile (id only) Table 49.39. Command arguments Value Summary -h, --help Show this help message and exit 49.14. network flavor create Create new network flavor Usage: Table 49.40. Positional arguments Value Summary <name> Name for the flavor Table 49.41. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --service-type <service-type> Service type to which the flavor applies to: e.g. vpn (See openstack network service provider list for loaded examples.) --description DESCRIPTION Description for the flavor --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --enable Enable the flavor (default) --disable Disable the flavor Table 49.42. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.43. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.44. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.45. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.15. network flavor delete Delete network flavors Usage: Table 49.46. Positional arguments Value Summary <flavor> Flavor(s) to delete (name or id) Table 49.47. Command arguments Value Summary -h, --help Show this help message and exit 49.16. network flavor list List network flavors Usage: Table 49.48. Command arguments Value Summary -h, --help Show this help message and exit Table 49.49. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.50. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.51. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.52. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.17. network flavor profile create Create new network flavor profile Usage: Table 49.53. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --description <description> Description for the flavor profile --enable Enable the flavor profile --disable Disable the flavor profile --driver DRIVER Python module path to driver. this becomes required if --metainfo is missing and vice versa --metainfo METAINFO Metainfo for the flavor profile. this becomes required if --driver is missing and vice versa Table 49.54. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.55. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.56. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.57. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.18. network flavor profile delete Delete network flavor profile Usage: Table 49.58. Positional arguments Value Summary <flavor-profile> Flavor profile(s) to delete (id only) Table 49.59. Command arguments Value Summary -h, --help Show this help message and exit 49.19. network flavor profile list List network flavor profile(s) Usage: Table 49.60. Command arguments Value Summary -h, --help Show this help message and exit Table 49.61. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.62. 
CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.63. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.64. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.20. network flavor profile set Set network flavor profile properties Usage: Table 49.65. Positional arguments Value Summary <flavor-profile> Flavor profile to update (id only) Table 49.66. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --description <description> Description for the flavor profile --enable Enable the flavor profile --disable Disable the flavor profile --driver DRIVER Python module path to driver. this becomes required if --metainfo is missing and vice versa --metainfo METAINFO Metainfo for the flavor profile. this becomes required if --driver is missing and vice versa 49.21. network flavor profile show Display network flavor profile details Usage: Table 49.67. Positional arguments Value Summary <flavor-profile> Flavor profile to display (id only) Table 49.68. Command arguments Value Summary -h, --help Show this help message and exit Table 49.69. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.70. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.71. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.72. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.22. network flavor remove profile Remove service profile from network flavor Usage: Table 49.73. Positional arguments Value Summary <flavor> Network flavor (name or id) <service-profile> Service profile (id only) Table 49.74. Command arguments Value Summary -h, --help Show this help message and exit 49.23. network flavor set Set network flavor properties Usage: Table 49.75. Positional arguments Value Summary <flavor> Flavor to update (name or id) Table 49.76. 
Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --description DESCRIPTION Set network flavor description --disable Disable network flavor --enable Enable network flavor --name <name> Set flavor name 49.24. network flavor show Display network flavor details Usage: Table 49.77. Positional arguments Value Summary <flavor> Flavor to display (name or id) Table 49.78. Command arguments Value Summary -h, --help Show this help message and exit Table 49.79. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.80. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.81. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.82. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.25. network l3 conntrack helper create Create a new L3 conntrack helper Usage: Table 49.83. Positional arguments Value Summary <router> Router for which conntrack helper will be created Table 49.84. Command arguments Value Summary -h, --help Show this help message and exit --helper <helper> The netfilter conntrack helper module --protocol <protocol> The network protocol for the netfilter conntrack target rule --port <port> The network port for the netfilter conntrack target rule Table 49.85. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.86. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.87. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.88. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.26. network l3 conntrack helper delete Delete L3 conntrack helper Usage: Table 49.89. Positional arguments Value Summary <router> Router that the conntrack helper belong to <conntrack-helper-id> The id of the conntrack helper(s) to delete Table 49.90. Command arguments Value Summary -h, --help Show this help message and exit 49.27. 
network l3 conntrack helper list List L3 conntrack helpers Usage: Table 49.91. Positional arguments Value Summary <router> Router that the conntrack helper belong to Table 49.92. Command arguments Value Summary -h, --help Show this help message and exit --helper <helper> The netfilter conntrack helper module --protocol <protocol> The network protocol for the netfilter conntrack target rule --port <port> The network port for the netfilter conntrack target rule Table 49.93. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.94. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.95. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.96. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.28. network l3 conntrack helper set Set L3 conntrack helper properties Usage: Table 49.97. Positional arguments Value Summary <router> Router that the conntrack helper belong to <conntrack-helper-id> The id of the conntrack helper(s) Table 49.98. Command arguments Value Summary -h, --help Show this help message and exit --helper <helper> The netfilter conntrack helper module --protocol <protocol> The network protocol for the netfilter conntrack target rule --port <port> The network port for the netfilter conntrack target rule 49.29. network l3 conntrack helper show Display L3 conntrack helper details Usage: Table 49.99. Positional arguments Value Summary <router> Router that the conntrack helper belong to <conntrack-helper-id> The id of the conntrack helper Table 49.100. Command arguments Value Summary -h, --help Show this help message and exit Table 49.101. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.102. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.103. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.104. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.30. network list List networks Usage: Table 49.105. 
Command arguments Value Summary -h, --help Show this help message and exit --external List external networks --internal List internal networks --long List additional fields in output --name <name> List networks according to their name --enable List enabled networks --disable List disabled networks --project <project> List networks according to their project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --share List networks shared between projects --no-share List networks not shared between projects --status <status> List networks according to their status ( active , BUILD , DOWN , ERROR ) --provider-network-type <provider-network-type> List networks according to their physical mechanisms. The supported options are: flat, geneve, gre, local, vlan, vxlan. --provider-physical-network <provider-physical-network> List networks according to name of the physical network --provider-segment <provider-segment> List networks according to vlan id for vlan networks or Tunnel ID for GENEVE/GRE/VXLAN networks --agent <agent-id> List networks hosted by agent (id only) --tags <tag>[,<tag>,... ] List networks which have all given tag(s) (comma- separated list of tags) --any-tags <tag>[,<tag>,... ] List networks which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude networks which have all given tag(s) (comma- separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude networks which have any given tag(s) (comma- separated list of tags) Table 49.106. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.107. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.108. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.109. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.31. network log create Create a new network log Usage: Table 49.110. Positional arguments Value Summary <name> Name for the network log Table 49.111. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Description of the network log --enable Enable this log --disable Disable this log (default is enabled) --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --event {ALL,ACCEPT,DROP} An event to store with log --resource-type <resource-type> Network log type(s). 
you can see supported type(s) with the following command: $ openstack network loggable resources list --resource <resource> Name or id of resource (security group or firewall group) that is used for logging. You can control for logging target combination with --target option. --target <target> Port (name or id) for logging. you can control for logging target combination with --resource option. Table 49.112. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.113. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.114. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.115. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.32. network log delete Delete network log(s) Usage: Table 49.116. Positional arguments Value Summary <network-log> Network log(s) to delete (name or id) Table 49.117. Command arguments Value Summary -h, --help Show this help message and exit 49.33. network log list List network logs Usage: Table 49.118. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output Table 49.119. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.120. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.121. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.122. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.34. network log set Set network log properties Usage: Table 49.123. Positional arguments Value Summary <network-log> Network log to set (name or id) Table 49.124. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Description of the network log --enable Enable this log --disable Disable this log (default is enabled) --name <name> Name of the network log 49.35. network log show Display network log details Usage: Table 49.125. Positional arguments Value Summary <network-log> Network log to show (name or id) Table 49.126. 
Command arguments Value Summary -h, --help Show this help message and exit Table 49.127. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.128. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.129. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.130. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.36. network loggable resources list List supported loggable resources Usage: Table 49.131. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output Table 49.132. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.133. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.134. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.135. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.37. network meter create Create network meter Usage: Table 49.136. Positional arguments Value Summary <name> Name of meter Table 49.137. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --description <description> Create description for meter --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --share Share meter between projects --no-share Do not share meter between projects Table 49.138. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.139. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.140. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.141. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.38. network meter delete Delete network meter Usage: Table 49.142. Positional arguments Value Summary <meter> Meter to delete (name or id) Table 49.143. Command arguments Value Summary -h, --help Show this help message and exit 49.39. network meter list List network meters Usage: Table 49.144. Command arguments Value Summary -h, --help Show this help message and exit Table 49.145. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.146. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.147. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.148. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.40. network meter rule create Create a new meter rule Usage: Table 49.149. Positional arguments Value Summary <meter> Label to associate with this metering rule (name or ID) Table 49.150. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. 
--exclude Exclude remote ip prefix from traffic count --include Include remote ip prefix from traffic count (default) --ingress Apply rule to incoming network traffic (default) --egress Apply rule to outgoing network traffic --remote-ip-prefix <remote-ip-prefix> The remote ip prefix to associate with this rule --source-ip-prefix <remote-ip-prefix> The source ip prefix to associate with this rule --destination-ip-prefix <remote-ip-prefix> The destination ip prefix to associate with this rule Table 49.151. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.152. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.153. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.154. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.41. network meter rule delete Delete meter rule(s) Usage: Table 49.155. Positional arguments Value Summary <meter-rule-id> Meter rule to delete (id only) Table 49.156. Command arguments Value Summary -h, --help Show this help message and exit 49.42. network meter rule list List meter rules Usage: Table 49.157. Command arguments Value Summary -h, --help Show this help message and exit Table 49.158. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.159. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.160. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.161. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.43. network meter rule show Display meter rules details Usage: Table 49.162. Positional arguments Value Summary <meter-rule-id> Meter rule (id only) Table 49.163. Command arguments Value Summary -h, --help Show this help message and exit Table 49.164. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.165. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.166. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.167. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.44. network meter show Show network meter Usage: Table 49.168. Positional arguments Value Summary <meter> Meter to display (name or id) Table 49.169. Command arguments Value Summary -h, --help Show this help message and exit Table 49.170. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.171. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.172. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.173. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.45. network onboard subnets Onboard network subnets into a subnet pool Usage: Table 49.174. Positional arguments Value Summary <network> Onboard all subnets associated with this network <subnetpool> Target subnet pool for onboarding subnets Table 49.175. Command arguments Value Summary -h, --help Show this help message and exit 49.46. network qos policy create Create a QoS policy Usage: Table 49.176. Positional arguments Value Summary <name> Name of qos policy to create Table 49.177. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --description <description> Description of the qos policy --share Make the qos policy accessible by other projects --no-share Make the qos policy not accessible by other projects (default) --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --default Set this as a default network qos policy --no-default Set this as a non-default network qos policy Table 49.178. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.179. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.180. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.181. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.47. network qos policy delete Delete Qos Policy(s) Usage: Table 49.182. Positional arguments Value Summary <qos-policy> Qos policy(s) to delete (name or id) Table 49.183. Command arguments Value Summary -h, --help Show this help message and exit 49.48. network qos policy list List QoS policies Usage: Table 49.184. Command arguments Value Summary -h, --help Show this help message and exit --project <project> List qos policies according to their project (name or ID) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --share List qos policies shared between projects --no-share List qos policies not shared between projects Table 49.185. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.186. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.187. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.188. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.49. network qos policy set Set QoS policy properties Usage: Table 49.189. Positional arguments Value Summary <qos-policy> Qos policy to modify (name or id) Table 49.190. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --name <name> Set qos policy name --description <description> Description of the qos policy --share Make the qos policy accessible by other projects --no-share Make the qos policy not accessible by other projects --default Set this as a default network qos policy --no-default Set this as a non-default network qos policy 49.50. 
network qos policy show Display QoS policy details Usage: Table 49.191. Positional arguments Value Summary <qos-policy> Qos policy to display (name or id) Table 49.192. Command arguments Value Summary -h, --help Show this help message and exit Table 49.193. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.194. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.195. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.196. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.51. network qos rule create Create new Network QoS rule Usage: Table 49.197. Positional arguments Value Summary <qos-policy> Qos policy that contains the rule (name or id) Table 49.198. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --type <type> Qos rule type (minimum-bandwidth, minimum-packet-rate, dscp-marking, bandwidth-limit) --max-kbps <max-kbps> Maximum bandwidth in kbps --max-burst-kbits <max-burst-kbits> Maximum burst in kilobits, 0 or not specified means automatic, which is 80% of the bandwidth limit, which works for typical TCP traffic. For details check the QoS user workflow. --dscp-mark <dscp-mark> Dscp mark: value can be 0, even numbers from 8-56, excluding 42, 44, 50, 52, and 54 --min-kbps <min-kbps> Minimum guaranteed bandwidth in kbps --min-kpps <min-kpps> Minimum guaranteed packet rate in kpps --ingress Ingress traffic direction from the project point of view --egress Egress traffic direction from the project point of view --any Any traffic direction from the project point of view. Can be used only with minimum packet rate rule. Table 49.199. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.200. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.201. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.202. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.52. 
network qos rule delete Delete Network QoS rule Usage: Table 49.203. Positional arguments Value Summary <qos-policy> Qos policy that contains the rule (name or id) <rule-id> Network qos rule to delete (id) Table 49.204. Command arguments Value Summary -h, --help Show this help message and exit 49.53. network qos rule list List Network QoS rules Usage: Table 49.205. Positional arguments Value Summary <qos-policy> Qos policy that contains the rule (name or id) Table 49.206. Command arguments Value Summary -h, --help Show this help message and exit Table 49.207. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.208. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.209. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.210. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.54. network qos rule set Set Network QoS rule properties Usage: Table 49.211. Positional arguments Value Summary <qos-policy> Qos policy that contains the rule (name or id) <rule-id> Network qos rule to modify (id) Table 49.212. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --max-kbps <max-kbps> Maximum bandwidth in kbps --max-burst-kbits <max-burst-kbits> Maximum burst in kilobits, 0 or not specified means automatic, which is 80% of the bandwidth limit, which works for typical TCP traffic. For details check the QoS user workflow. --dscp-mark <dscp-mark> Dscp mark: value can be 0, even numbers from 8-56, excluding 42, 44, 50, 52, and 54 --min-kbps <min-kbps> Minimum guaranteed bandwidth in kbps --min-kpps <min-kpps> Minimum guaranteed packet rate in kpps --ingress Ingress traffic direction from the project point of view --egress Egress traffic direction from the project point of view --any Any traffic direction from the project point of view. Can be used only with minimum packet rate rule. 49.55. network qos rule show Display Network QoS rule details Usage: Table 49.213. Positional arguments Value Summary <qos-policy> Qos policy that contains the rule (name or id) <rule-id> Network qos rule to display (id) Table 49.214. Command arguments Value Summary -h, --help Show this help message and exit Table 49.215. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.216. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.217. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.218. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.56. network qos rule type list List QoS rule types Usage: Table 49.219. Command arguments Value Summary -h, --help Show this help message and exit --all-supported List all the qos rule types supported by any loaded mechanism drivers (the union of all sets of supported rules) --all-rules List all qos rule types implemented in neutron qos driver Table 49.220. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.221. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.222. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.223. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.57. network qos rule type show Show details about supported QoS rule type Usage: Table 49.224. Positional arguments Value Summary <qos-rule-type-name> Name of qos rule type Table 49.225. Command arguments Value Summary -h, --help Show this help message and exit Table 49.226. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.227. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.228. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.229. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.58. network rbac create Create network RBAC policy Usage: Table 49.230. Positional arguments Value Summary <rbac-object> The object to which this rbac policy affects (name or ID) Table 49.231. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --type <type> Type of the object that rbac policy affects ("address_group", "address_scope", "security_group", "subnetpool", "qos_policy" or "network") --action <action> Action for the rbac policy ("access_as_external" or "access_as_shared") --target-project <target-project> The project to which the rbac policy will be enforced (name or ID) --target-all-projects Allow creating rbac policy for all projects. --target-project-domain <target-project-domain> Domain the target project belongs to (name or id). This can be used in case collisions between project names exist. --project <project> The owner project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 49.232. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.233. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.234. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.235. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.59. network rbac delete Delete network RBAC policy(s) Usage: Table 49.236. Positional arguments Value Summary <rbac-policy> Rbac policy(s) to delete (id only) Table 49.237. Command arguments Value Summary -h, --help Show this help message and exit 49.60. network rbac list List network RBAC policies Usage: Table 49.238. Command arguments Value Summary -h, --help Show this help message and exit --type <type> List network rbac policies according to given object type ("address_group", "address_scope", "security_group", "subnetpool", "qos_policy" or "network") --action <action> List network rbac policies according to given action ("access_as_external" or "access_as_shared") --target-project <target-project> List network rbac policies for a specific target project --long List additional fields in output Table 49.239. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.240. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.241. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.242. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.61. network rbac set Set network RBAC policy properties Usage: Table 49.243. Positional arguments Value Summary <rbac-policy> Rbac policy to be modified (id only) Table 49.244. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --target-project <target-project> The project to which the rbac policy will be enforced (name or ID) --target-project-domain <target-project-domain> Domain the target project belongs to (name or id). This can be used in case collisions between project names exist. 49.62. network rbac show Display network RBAC policy details Usage: Table 49.245. Positional arguments Value Summary <rbac-policy> Rbac policy (id only) Table 49.246. Command arguments Value Summary -h, --help Show this help message and exit Table 49.247. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.248. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.249. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.250. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.63. network segment create Create new network segment Usage: Table 49.251. Positional arguments Value Summary <name> New network segment name Table 49.252. 
Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --description <description> Network segment description --physical-network <physical-network> Physical network name of this network segment --segment <segment> Segment identifier for this network segment which is based on the network type, VLAN ID for vlan network type and tunnel ID for geneve, gre and vxlan network types --network <network> Network this network segment belongs to (name or id) --network-type <network-type> Network type of this network segment (flat, geneve, gre, local, vlan or vxlan) Table 49.253. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.254. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.255. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.256. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.64. network segment delete Delete network segment(s) Usage: Table 49.257. Positional arguments Value Summary <network-segment> Network segment(s) to delete (name or id) Table 49.258. Command arguments Value Summary -h, --help Show this help message and exit 49.65. network segment list List network segments Usage: Table 49.259. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output --network <network> List network segments that belong to this network (name or ID) Table 49.260. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.261. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.262. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.263. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.66. network segment range create Create new network segment range Usage: Table 49.264. Positional arguments Value Summary <name> Name of new network segment range Table 49.265. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --private Network segment range is assigned specifically to the project --shared Network segment range is shared with other projects --project <project> Network segment range owner (name or id). optional when the segment range is shared --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --network-type <network-type> Network type of this network segment range (geneve, gre, vlan or vxlan) --physical-network <physical-network-name> Physical network name of this network segment range --minimum <minimum-segmentation-id> Minimum segment identifier for this network segment range which is based on the network type, VLAN ID for vlan network type and tunnel ID for geneve, gre and vxlan network types --maximum <maximum-segmentation-id> Maximum segment identifier for this network segment range which is based on the network type, VLAN ID for vlan network type and tunnel ID for geneve, gre and vxlan network types Table 49.266. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.267. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.268. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.269. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.67. network segment range delete Delete network segment range(s) Usage: Table 49.270. Positional arguments Value Summary <network-segment-range> Network segment range(s) to delete (name or id) Table 49.271. Command arguments Value Summary -h, --help Show this help message and exit 49.68. network segment range list List network segment ranges Usage: Table 49.272. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output --used List network segment ranges that have segments in use --unused List network segment ranges that have segments not in use --available List network segment ranges that have available segments --unavailable List network segment ranges without available segments Table 49.273. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.274. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.275. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.276. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.69. network segment range set Set network segment range properties Usage: Table 49.277. Positional arguments Value Summary <network-segment-range> Network segment range to modify (name or id) Table 49.278. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --name <name> Set network segment name --minimum <minimum-segmentation-id> Set network segment range minimum segment identifier --maximum <maximum-segmentation-id> Set network segment range maximum segment identifier 49.70. network segment range show Display network segment range details Usage: Table 49.279. Positional arguments Value Summary <network-segment-range> Network segment range to display (name or id) Table 49.280. Command arguments Value Summary -h, --help Show this help message and exit Table 49.281. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.282. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.283. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.284. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.71. network segment set Set network segment properties Usage: Table 49.285. Positional arguments Value Summary <network-segment> Network segment to modify (name or id) Table 49.286. 
Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --description <description> Set network segment description --name <name> Set network segment name 49.72. network segment show Display network segment details Usage: Table 49.287. Positional arguments Value Summary <network-segment> Network segment to display (name or id) Table 49.288. Command arguments Value Summary -h, --help Show this help message and exit Table 49.289. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.290. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.291. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.292. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.73. network service provider list List Service Providers Usage: Table 49.293. Command arguments Value Summary -h, --help Show this help message and exit Table 49.294. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.295. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.296. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.297. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.74. network set Set network properties Usage: Table 49.298. Positional arguments Value Summary <network> Network to modify (name or id) Table 49.299. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. 
Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --name <name> Set network name --enable Enable network --disable Disable network --share Share the network between projects --no-share Do not share the network between projects --description <description> Set network description --mtu <mtu> Set network mtu --enable-port-security Enable port security by default for ports created on this network --disable-port-security Disable port security by default for ports created on this network --external The network has an external routing facility that's not managed by Neutron and can be used as in: openstack router set --external-gateway NETWORK (external-net extension required) --internal Opposite of --external --default Set the network as the default external network --no-default Do not use the network as the default external network --qos-policy <qos-policy> Qos policy to attach to this network (name or id) --no-qos-policy Remove the qos policy attached to this network --tag <tag> Tag to be added to the network (repeat option to set multiple tags) --no-tag Clear tags associated with the network. specify both --tag and --no-tag to overwrite current tags --provider-network-type <provider-network-type> The physical mechanism by which the virtual network is implemented. For example: flat, geneve, gre, local, vlan, vxlan. --provider-physical-network <provider-physical-network> Name of the physical network over which the virtual network is implemented --provider-segment <provider-segment> Vlan id for vlan networks or tunnel id for GENEVE/GRE/VXLAN networks --dns-domain <dns-domain> Set dns domain for this network (requires dns integration extension) 49.75. network show Show network details Usage: Table 49.300. Positional arguments Value Summary <network> Network to display (name or id) Table 49.301. Command arguments Value Summary -h, --help Show this help message and exit Table 49.302. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.303. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.304. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.305. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.76. network subport list List all subports for a given network trunk Usage: Table 49.306. Command arguments Value Summary -h, --help Show this help message and exit --trunk <trunk> List subports belonging to this trunk (name or id) Table 49.307. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.308. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.309. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.310. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.77. network trunk create Create a network trunk for a given project Usage: Table 49.311. Positional arguments Value Summary <name> Name of the trunk to create Table 49.312. Command arguments Value Summary -h, --help Show this help message and exit --description <description> A description of the trunk --parent-port <parent-port> Parent port belonging to this trunk (name or id) --subport <port=,segmentation-type=,segmentation-id=> Subport to add. subport is of form port=<name or ID>,segmentation-type=<segmentation-type>,segmentation-id=<segmentation-ID> (--subport) option can be repeated --enable Enable trunk (default) --disable Disable trunk --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 49.313. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.314. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.315. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.316. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.78. network trunk delete Delete a given network trunk Usage: Table 49.317. Positional arguments Value Summary <trunk> Trunk(s) to delete (name or id) Table 49.318. Command arguments Value Summary -h, --help Show this help message and exit 49.79. network trunk list List all network trunks Usage: Table 49.319. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output Table 49.320. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 49.321. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.322. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.323. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.80. network trunk set Set network trunk properties Usage: Table 49.324. Positional arguments Value Summary <trunk> Trunk to modify (name or id) Table 49.325. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set trunk name --description <description> A description of the trunk --subport <port=,segmentation-type=,segmentation-id=> Subport to add. subport is of form port=<name or ID>,segmentation-type=<segmentation-type>,segmentation-id=<segmentation-ID> (--subport) option can be repeated --enable Enable trunk --disable Disable trunk 49.81. network trunk show Show information of a given network trunk Usage: Table 49.326. Positional arguments Value Summary <trunk> Trunk to display (name or id) Table 49.327. Command arguments Value Summary -h, --help Show this help message and exit Table 49.328. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 49.329. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.330. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.331. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.82. network trunk unset Unset subports from a given network trunk Usage: Table 49.332. Positional arguments Value Summary <trunk> Unset subports from this trunk (name or id) Table 49.333. Command arguments Value Summary -h, --help Show this help message and exit --subport <subport> Subport to delete (name or id of the port) (--subport) option can be repeated 49.83. network unset Unset network properties Usage: Table 49.334. Positional arguments Value Summary <network> Network to modify (name or id) Table 49.335. 
Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --tag <tag> Tag to be removed from the network (repeat option to remove multiple tags) --all-tag Clear all tags associated with the network
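Example: the metering commands above (network meter and network meter rule) are normally used as a pair: a meter is created first and traffic-count rules are then attached to it. The sketch below uses only options documented in this chapter; the meter name traffic-meter and the 10.0.0.0/24 prefix are illustrative placeholders, not values taken from this reference.
openstack network meter create --description "Example traffic meter" traffic-meter
openstack network meter rule create --ingress --remote-ip-prefix 10.0.0.0/24 traffic-meter
openstack network meter rule list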
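Example: the QoS commands in sections 49.46 to 49.55 are typically used together: create a policy, attach one or more rules to it, then bind the policy to a network. This is a minimal sketch only; the names bw-limiter and private-net and the rate values are assumptions chosen for illustration.
openstack network qos policy create --description "Bandwidth limiting policy" bw-limiter
openstack network qos rule create --type bandwidth-limit --max-kbps 3000 --max-burst-kbits 2400 --egress bw-limiter
openstack network qos rule list bw-limiter
openstack network set --qos-policy bw-limiter private-net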
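Example: the RBAC commands in sections 49.58 to 49.62 make a resource such as a network or QoS policy visible to another project. A hedged sketch of sharing the hypothetical QoS policy bw-limiter with a project named demo-project might look like this:
openstack network rbac create --type qos_policy --action access_as_shared --target-project demo-project bw-limiter
openstack network rbac list --type qos_policy --action access_as_shared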
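Example: network segment range create (section 49.66) reserves a block of segmentation IDs for a given network type. The following is an illustrative sketch for a shared VLAN range; the physical network name physnet1, the range name, and the 200-299 interval are placeholders.
openstack network segment range create --shared --network-type vlan --physical-network physnet1 --minimum 200 --maximum 299 vlan-range-example
openstack network segment range list --available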
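Example: the trunk commands in sections 49.77 to 49.82 attach segmentation-tagged subports to a parent port. The port names parent0, child0 and child1, the trunk name trunk0, and the VLAN IDs are placeholders used only to show the --subport syntax documented above.
openstack network trunk create --parent-port parent0 --subport port=child0,segmentation-type=vlan,segmentation-id=100 trunk0
openstack network trunk set --subport port=child1,segmentation-type=vlan,segmentation-id=101 trunk0
openstack network trunk unset --subport child1 trunk0
openstack network trunk show trunk0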
[ "openstack network agent add network [-h] [--dhcp] <agent-id> <network>", "openstack network agent add router [-h] [--l3] <agent-id> <router>", "openstack network agent delete [-h] <network-agent> [<network-agent> ...]", "openstack network agent list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--agent-type <agent-type>] [--host <host>] [--network <network> | --router <router>] [--long]", "openstack network agent remove network [-h] [--dhcp] <agent-id> <network>", "openstack network agent remove router [-h] [--l3] <agent-id> <router>", "openstack network agent set [-h] [--description <description>] [--enable | --disable] <network-agent>", "openstack network agent show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <network-agent>", "openstack network auto allocated topology create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--project <project>] [--project-domain <project-domain>] [--check-resources] [--or-show]", "openstack network auto allocated topology delete [-h] [--project <project>] [--project-domain <project-domain>]", "openstack network create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--share | --no-share] [--enable | --disable] [--project <project>] [--description <description>] [--mtu <mtu>] [--project-domain <project-domain>] [--availability-zone-hint <availability-zone>] [--enable-port-security | --disable-port-security] [--external | --internal] [--default | --no-default] [--qos-policy <qos-policy>] [--transparent-vlan | --no-transparent-vlan] [--provider-network-type <provider-network-type>] [--provider-physical-network <provider-physical-network>] [--provider-segment <provider-segment>] [--dns-domain <dns-domain>] [--tag <tag> | --no-tag] <name>", "openstack network delete [-h] <network> [<network> ...]", "openstack network flavor add profile [-h] <flavor> <service-profile>", "openstack network flavor create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] --service-type <service-type> [--description DESCRIPTION] [--project <project>] [--project-domain <project-domain>] [--enable | --disable] <name>", "openstack network flavor delete [-h] <flavor> [<flavor> ...]", "openstack network flavor list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]", "openstack network flavor profile create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--project <project>] [--project-domain <project-domain>] [--description <description>] [--enable | --disable] [--driver DRIVER] [--metainfo METAINFO]", "openstack network flavor profile delete [-h] 
<flavor-profile> [<flavor-profile> ...]", "openstack network flavor profile list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]", "openstack network flavor profile set [-h] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--project-domain <project-domain>] [--description <description>] [--enable | --disable] [--driver DRIVER] [--metainfo METAINFO] <flavor-profile>", "openstack network flavor profile show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <flavor-profile>", "openstack network flavor remove profile [-h] <flavor> <service-profile>", "openstack network flavor set [-h] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--description DESCRIPTION] [--disable | --enable] [--name <name>] <flavor>", "openstack network flavor show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <flavor>", "openstack network l3 conntrack helper create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --helper <helper> --protocol <protocol> --port <port> <router>", "openstack network l3 conntrack helper delete [-h] <router> <conntrack-helper-id> [<conntrack-helper-id> ...]", "openstack network l3 conntrack helper list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--helper <helper>] [--protocol <protocol>] [--port <port>] <router>", "openstack network l3 conntrack helper set [-h] [--helper <helper>] [--protocol <protocol>] [--port <port>] <router> <conntrack-helper-id>", "openstack network l3 conntrack helper show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <router> <conntrack-helper-id>", "openstack network list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--external | --internal] [--long] [--name <name>] [--enable | --disable] [--project <project>] [--project-domain <project-domain>] [--share | --no-share] [--status <status>] [--provider-network-type <provider-network-type>] [--provider-physical-network <provider-physical-network>] [--provider-segment <provider-segment>] [--agent <agent-id>] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]", "openstack network log create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--enable | --disable] [--project <project>] [--project-domain <project-domain>] [--event {ALL,ACCEPT,DROP}] --resource-type <resource-type> [--resource <resource>] [--target <target>] <name>", "openstack network log delete [-h] <network-log> [<network-log> ...]", "openstack network log list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote 
{all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long]", "openstack network log set [-h] [--description <description>] [--enable | --disable] [--name <name>] <network-log>", "openstack network log show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <network-log>", "openstack network loggable resources list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long]", "openstack network meter create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--description <description>] [--project <project>] [--project-domain <project-domain>] [--share | --no-share] <name>", "openstack network meter delete [-h] <meter> [<meter> ...]", "openstack network meter list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]", "openstack network meter rule create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--project <project>] [--project-domain <project-domain>] [--exclude | --include] [--ingress | --egress] [--remote-ip-prefix <remote-ip-prefix>] [--source-ip-prefix <remote-ip-prefix>] [--destination-ip-prefix <remote-ip-prefix>] <meter>", "openstack network meter rule delete [-h] <meter-rule-id> [<meter-rule-id> ...]", "openstack network meter rule list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]", "openstack network meter rule show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <meter-rule-id>", "openstack network meter show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <meter>", "openstack network onboard subnets [-h] <network> <subnetpool>", "openstack network qos policy create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--description <description>] [--share | --no-share] [--project <project>] [--project-domain <project-domain>] [--default | --no-default] <name>", "openstack network qos policy delete [-h] <qos-policy> [<qos-policy> ...]", "openstack network qos policy list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--project <project>] [--project-domain <project-domain>] [--share | --no-share]", "openstack 
network qos policy set [-h] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--name <name>] [--description <description>] [--share | --no-share] [--default | --no-default] <qos-policy>", "openstack network qos policy show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <qos-policy>", "openstack network qos rule create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] --type <type> [--max-kbps <max-kbps>] [--max-burst-kbits <max-burst-kbits>] [--dscp-mark <dscp-mark>] [--min-kbps <min-kbps>] [--min-kpps <min-kpps>] [--ingress | --egress | --any] <qos-policy>", "openstack network qos rule delete [-h] <qos-policy> <rule-id>", "openstack network qos rule list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <qos-policy>", "openstack network qos rule set [-h] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--max-kbps <max-kbps>] [--max-burst-kbits <max-burst-kbits>] [--dscp-mark <dscp-mark>] [--min-kbps <min-kbps>] [--min-kpps <min-kpps>] [--ingress | --egress | --any] <qos-policy> <rule-id>", "openstack network qos rule show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <qos-policy> <rule-id>", "openstack network qos rule type list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-supported | --all-rules]", "openstack network qos rule type show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <qos-rule-type-name>", "openstack network rbac create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] --type <type> --action <action> (--target-project <target-project> | --target-all-projects) [--target-project-domain <target-project-domain>] [--project <project>] [--project-domain <project-domain>] <rbac-object>", "openstack network rbac delete [-h] <rbac-policy> [<rbac-policy> ...]", "openstack network rbac list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--type <type>] [--action <action>] [--target-project <target-project>] [--long]", "openstack network rbac set [-h] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--target-project <target-project>] [--target-project-domain <target-project-domain>] <rbac-policy>", "openstack network rbac show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <rbac-policy>", "openstack network segment create [-h] [-f {json,shell,table,value,yaml}] 
[-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--description <description>] [--physical-network <physical-network>] [--segment <segment>] --network <network> --network-type <network-type> <name>", "openstack network segment delete [-h] <network-segment> [<network-segment> ...]", "openstack network segment list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long] [--network <network>]", "openstack network segment range create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--private | --shared] [--project <project>] [--project-domain <project-domain>] --network-type <network-type> [--physical-network <physical-network-name>] --minimum <minimum-segmentation-id> --maximum <maximum-segmentation-id> <name>", "openstack network segment range delete [-h] <network-segment-range> [<network-segment-range> ...]", "openstack network segment range list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long] [--used | --unused] [--available | --unavailable]", "openstack network segment range set [-h] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--name <name>] [--minimum <minimum-segmentation-id>] [--maximum <maximum-segmentation-id>] <network-segment-range>", "openstack network segment range show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <network-segment-range>", "openstack network segment set [-h] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--description <description>] [--name <name>] <network-segment>", "openstack network segment show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <network-segment>", "openstack network service provider list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]", "openstack network set [-h] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--name <name>] [--enable | --disable] [--share | --no-share] [--description <description>] [--mtu <mtu>] [--enable-port-security | --disable-port-security] [--external | --internal] [--default | --no-default] [--qos-policy <qos-policy> | --no-qos-policy] [--tag <tag>] [--no-tag] [--provider-network-type <provider-network-type>] [--provider-physical-network <provider-physical-network>] [--provider-segment <provider-segment>] [--dns-domain <dns-domain>] <network>", "openstack network show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <network>", "openstack network subport list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] 
[--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] --trunk <trunk>", "openstack network trunk create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] --parent-port <parent-port> [--subport <port=,segmentation-type=,segmentation-id=>] [--enable | --disable] [--project <project>] [--project-domain <project-domain>] <name>", "openstack network trunk delete [-h] <trunk> [<trunk> ...]", "openstack network trunk list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long]", "openstack network trunk set [-h] [--name <name>] [--description <description>] [--subport <port=,segmentation-type=,segmentation-id=>] [--enable | --disable] <trunk>", "openstack network trunk show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <trunk>", "openstack network trunk unset [-h] --subport <subport> <trunk>", "openstack network unset [-h] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--tag <tag> | --all-tag] <network>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/network
Chapter 338. ContextIdListing enabled
Chapter 338. ContextIdListing enabled When contextIdListing is enabled, all the running CamelContexts in the same JVM are detected. These contexts are listed in the root path, for example /api-docs , as a simple list of names in JSON format. To access the OpenApi documentation, the context-path must be appended with the Camel context id, such as api-docs/myCamel . The option apiContextIdPattern can be used to filter the names in this list.
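A quick way to see the listing in action is to query the endpoints with curl. This is a minimal sketch: the host and port are assumptions for the example, and myCamel is the sample context id used above.
# List the names of all running CamelContexts detected in the JVM (returned as a JSON list)
curl http://localhost:8080/api-docs
# Fetch the OpenApi document of one specific context by appending its id
curl http://localhost:8080/api-docs/myCamel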
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/contextidlisting_enabled
Chapter 14. Uninstalling the Migration Toolkit for Virtualization
Chapter 14. Uninstalling the Migration Toolkit for Virtualization You can uninstall the Migration Toolkit for Virtualization (MTV) by using the Red Hat OpenShift web console or the command-line interface (CLI). 14.1. Uninstalling MTV by using the Red Hat OpenShift web console You can uninstall Migration Toolkit for Virtualization (MTV) by using the Red Hat OpenShift web console. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure In the Red Hat OpenShift web console, click Operators > Installed Operators . Click Migration Toolkit for Virtualization Operator . The Operator Details page opens in the Details tab. Click the ForkliftController tab. Click Actions and select Delete ForkLiftController . A confirmation window opens. Click Delete . The controller is removed. Open the Details tab. The Create ForkliftController button appears instead of the controller you deleted. There is no need to click it. On the upper-right side of the page, click Actions and select Uninstall Operator . A confirmation window opens, displaying any operand instances. To delete all instances, select the Delete all operand instances for this operator checkbox. By default, the checkbox is cleared. Important If your Operator configured off-cluster resources, these will continue to run and will require manual cleanup. Click Uninstall . The Installed Operators page opens, and the Migration Toolkit for Virtualization Operator is removed from the list of installed Operators. Click Home > Overview . In the Status section of the page, click Dynamic Plugins . The Dynamic Plugins popup opens, listing forklift-console-plugin as a failed plugin. If the forklift-console-plugin does not appear as a failed plugin, refresh the web console. Click forklift-console-plugin . The ConsolePlugin details page opens in the Details tab. On the upper right-hand side of the page, click Actions and select Delete ConsolePlugin from the list. A confirmation window opens. Click Delete . The plugin is removed from the list of Dynamic plugins on the Overview page. If the plugin still appears, restart the Overview page. 14.2. Uninstalling MTV from the command line You can uninstall Migration Toolkit for Virtualization (MTV) from the command line. Note This action does not remove resources managed by the MTV Operator, including custom resource definitions (CRDs) and custom resources (CRs). To remove these after uninstalling the MTV Operator, you might need to manually delete the MTV Operator CRDs. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the forklift controller by running the following command: USD oc delete ForkliftController --all -n openshift-mtv Delete the subscription to the MTV Operator by running the following command: USD oc get subscription -o name|grep 'mtv-operator'| xargs oc delete Delete the clusterserviceversion for the MTV Operator by running the following command: USD oc get clusterserviceversion -o name|grep 'mtv-operator'| xargs oc delete Delete the plugin console CR by running the following command: USD oc delete ConsolePlugin forklift-console-plugin Optional: Delete the custom resource definitions (CRDs) by running the following command: oc get crd -o name | grep 'forklift.konveyor.io' | xargs oc delete Optional: Perform cleanup by deleting the MTV project by running the following command: oc delete project openshift-mtv
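After the uninstall, you might want to confirm that nothing was left behind. The following checks are a sketch rather than part of the official procedure; they only query the resource names used above with standard oc commands.
# No output means the forklift CRDs are gone (skip this check if you intentionally kept the CRDs)
oc get crd -o name | grep 'forklift.konveyor.io'
# This should report NotFound once the openshift-mtv project has been removed
oc get project openshift-mtv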
[ "oc delete ForkliftController --all -n openshift-mtv", "oc get subscription -o name|grep 'mtv-operator'| xargs oc delete", "oc get clusterserviceversion -o name|grep 'mtv-operator'| xargs oc delete", "oc delete ConsolePlugin forklift-console-plugin", "get crd -o name | grep 'forklift.konveyor.io' | xargs oc delete", "delete project openshift-mtv" ]
https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.7/html/installing_and_using_the_migration_toolkit_for_virtualization/uninstalling-mtv_mtv
Chapter 16. CredentialExpiryService
Chapter 16. CredentialExpiryService 16.1. GetCertExpiry GET /v1/credentialexpiry GetCertExpiry returns information related to the expiry component mTLS certificate. 16.1.1. Description 16.1.2. Parameters 16.1.2.1. Query Parameters Name Description Required Default Pattern component - UNKNOWN 16.1.3. Return Type V1GetCertExpiryResponse 16.1.4. Content Type application/json 16.1.5. Responses Table 16.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetCertExpiryResponse 0 An unexpected error response. GooglerpcStatus 16.1.6. Samples 16.1.7. Common object reference 16.1.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 16.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 16.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 16.1.7.3. V1GetCertExpiryResponse Field Name Required Nullable Type Description Format expiry Date date-time
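As a hedged illustration, the endpoint can be called with curl. The Central address, the API token variable, and the CENTRAL component value are assumptions made for this example and are not defined in this chapter.
# Query the expiry of the Central mTLS certificate; returns a V1GetCertExpiryResponse JSON body
curl -k -H "Authorization: Bearer $ROX_API_TOKEN" \
  "https://$ROX_CENTRAL_ADDRESS/v1/credentialexpiry?component=CENTRAL"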
[ "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/credentialexpiryservice
function::print_backtrace
function::print_backtrace Name function::print_backtrace - Print kernel stack back trace Synopsis Arguments None Description This function is equivalent to print_stack( backtrace ), except that deeper stack nesting may be supported. See print_ubacktrace for user-space backtrace. The function does not return a value.
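For illustration, a minimal SystemTap one-liner that calls this function; the probed kernel function vfs_read is an arbitrary example and not prescribed by this page.
# Print a kernel backtrace the first time vfs_read() is entered, then exit
stap -e 'probe kernel.function("vfs_read") { print_backtrace(); exit() }'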
[ "print_backtrace()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-print-backtrace
Chapter 41. identity
Chapter 41. identity This chapter describes the commands under the identity command. 41.1. identity provider create Create new identity provider Usage: Table 41.1. Positional arguments Value Summary <name> New identity provider name (must be unique) Table 41.2. Command arguments Value Summary -h, --help Show this help message and exit --remote-id <remote-id> Remote ids to associate with the identity provider (repeat option to provide multiple values) --remote-id-file <file-name> Name of a file that contains many remote ids to associate with the identity provider, one per line --description <description> New identity provider description --domain <domain> Domain to associate with the identity provider. if not specified, a domain will be created automatically. (Name or ID) --enable Enable identity provider (default) --disable Disable the identity provider Table 41.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 41.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 41.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 41.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 41.2. identity provider delete Delete identity provider(s) Usage: Table 41.7. Positional arguments Value Summary <identity-provider> Identity provider(s) to delete Table 41.8. Command arguments Value Summary -h, --help Show this help message and exit 41.3. identity provider list List identity providers Usage: Table 41.9. Command arguments Value Summary -h, --help Show this help message and exit --id <id> The identity providers' id attribute --enabled The identity providers that are enabled will be returned Table 41.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 41.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 41.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 41.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 41.4. identity provider set Set identity provider properties Usage: Table 41.14. 
Positional arguments Value Summary <identity-provider> Identity provider to modify Table 41.15. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Set identity provider description --remote-id <remote-id> Remote ids to associate with the identity provider (repeat option to provide multiple values) --remote-id-file <file-name> Name of a file that contains many remote ids to associate with the identity provider, one per line --enable Enable the identity provider --disable Disable the identity provider 41.5. identity provider show Display identity provider details Usage: Table 41.16. Positional arguments Value Summary <identity-provider> Identity provider to display Table 41.17. Command arguments Value Summary -h, --help Show this help message and exit Table 41.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 41.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 41.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 41.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
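A short end-to-end sketch that ties these subcommands together; the provider name, domain, and remote ID below are invented for the example.
# Create an identity provider in a dedicated domain with one remote ID
openstack identity provider create --domain federated_domain \
  --remote-id https://idp.example.com/idp/shibboleth myidp
# List, inspect, disable, and finally delete it
openstack identity provider list
openstack identity provider show myidp
openstack identity provider set --disable myidp
openstack identity provider delete myidp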
[ "openstack identity provider create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--remote-id <remote-id> | --remote-id-file <file-name>] [--description <description>] [--domain <domain>] [--enable | --disable] <name>", "openstack identity provider delete [-h] <identity-provider> [<identity-provider> ...]", "openstack identity provider list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--id <id>] [--enabled]", "openstack identity provider set [-h] [--description <description>] [--remote-id <remote-id> | --remote-id-file <file-name>] [--enable | --disable] <identity-provider>", "openstack identity provider show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <identity-provider>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/identity
Chapter 2. Eclipse Temurin features
Chapter 2. Eclipse Temurin features Eclipse Temurin does not contain structural changes from the upstream distribution of OpenJDK. For the list of changes and security fixes included in the latest OpenJDK 11.0.18 release of Eclipse Temurin, see OpenJDK 11.0.18 Released . New features and enhancements Review the following release notes to understand new features and feature enhancements included with the Eclipse Temurin 11.0.18 release: Enhanced BMP bounds By default, OpenJDK 11.0.18 disables loading a linked International Color Consortium (ICC) profile in a BMP image. You can enable this functionality by setting the new sun.imageio.bmp.enabledLinkedProfiles property to true . This property replaces the old sun.imageio.plugins.bmp.disableLinkedProfiles property See JDK-8295687 (JDK Bug System) . Improved banking of sounds Previously, the SoundbankReader implementation, com.sun.media.sound.JARSoundbankReader , downloaded a JAR soundbank from a URL. For OpenJDK 11.0.18, this behavior is now disabled by default. To re-enable the behavior, set the new system property jdk.sound.jarsoundbank to true . See JDK-8293742 (JDK Bug System) . Enhanced Datagram Transport Layer Security (DTLS) performance OpenJDK now exchanges DTLS cookies for all new and resumed handshake communications. To re-enable the release behavior, set the new system property jdk.tls.enableDtlsResumeCookie to false . See JDK-8287411 (JDK Bug System) . SunMSCAPI provider supports new Microsoft Windows keystore types The SunMSCAPI provider supports the following Microsoft Windows keystore types where you must append your local namespace to Windows- : Windows-MY-LOCALMACHINE Windows-ROOT-LOCALMACHINE Windows-MY-CURRENTUSER Windows-ROOT-CURRENTUSER By specifying any of these types, you can provide access to your local computer's location for the Microsoft Windows keystore. Thereby providing the keystore access to certificates that are stored on your local system. See JDK-6782021 (JDK Bug System). Added note for LoginModule implementation The OpenJDK 9 release changed the Set implementation, which holds principals and credentials, so that the implementation can reject null values. Any attempts to call add(null) , contains(null) , or remove(null) would throw a NullPointerException message. The OpenJDK 9 release did not update the logout() method in the LoginModule implementation to check for null values. These values could occur because of a failed login attempt, which can cause a logout() call to throw a NullPointerException message. The OpenJDK 11.0.18 release updates the LoginModule implementations to check for null values. Additionally, the release adds an implementation note to the specification that states the change also applies to third-party modules. The note advises developers of third-party modules to verify that a logout() method does not throw a NullPointerException message. See JDK-8015081 (JDK Bug System). See JDK-8282730 (JDK Bug System). Revised on 2024-05-09 16:48:08 UTC
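The properties described above are ordinary JVM system properties, so they can be passed on the java command line; the application JAR name below is a placeholder.
# Re-enable linked ICC profiles in BMP images and JAR soundbank loading (both disabled by default
# in 11.0.18), and restore the previous DTLS resumption behavior
java -Dsun.imageio.bmp.enabledLinkedProfiles=true \
     -Djdk.sound.jarsoundbank=true \
     -Djdk.tls.enableDtlsResumeCookie=false \
     -jar application.jar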
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.18/openjdk-temurin-features-11-0-18_openjdk
7.74. hplip
7.74. hplip 7.74.1. RHBA-2015:1282 - hplip bug fix and enhancement update Updated hplip packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The hplip packages contain the Hewlett-Packard Linux Imaging and Printing Project (HPLIP), which provides drivers for Hewlett-Packard printers and multi-function peripherals. Note The hplip packages have been upgraded to upstream version 3.14.6, which provides a number of bug fixes and enhancements over the previous version, including hardware enablement and new functionality, such as the Service Location Protocol (SLP) discovery feature. (BZ# 1077121 ) Bug Fixes BZ# 682814 Previously, HPLIP did not correctly handle CUPS denying a requested operation, such as enabling or disabling a printer. As a consequence, operating HP Device Manager as a non-root user did not prompt for the root password when the root password was required for an operation. With this update, the password callback is correctly implemented, and operating HP Device Manager as a non-root user now always prompts for the root password when required. BZ# 876066 Prior to this update, the use of an uninitialized value could produce incorrect output from the hpcups driver. The underlying source code has been modified to initialize the value before it is used, and the described unexpected behavior is therefore prevented. Users of hplip are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-hplip
23.2. Userspace Access
23.2. Userspace Access Always take care to use properly aligned and sized I/O. This is especially important for Direct I/O access. Direct I/O should be aligned on a logical_block_size boundary, and in multiples of the logical_block_size . With native 4K devices (i.e. logical_block_size is 4K) it is now critical that applications perform direct I/O in multiples of the device's logical_block_size . This means that applications that perform 512-byte aligned I/O rather than 4K-aligned I/O will fail with native 4K devices. To avoid this, an application should consult the I/O parameters of a device to ensure it is using the proper I/O alignment and size. As mentioned earlier, I/O parameters are exposed through both the sysfs and block device ioctl interfaces. For more information, see man libblkid . This man page is provided by the libblkid-devel package. sysfs Interface /sys/block/ disk /alignment_offset or /sys/block/ disk / partition /alignment_offset Note The file location depends on whether the disk is a physical disk (be that a local disk, local RAID, or a multipath LUN) or a virtual disk. The first file location is applicable to physical disks while the second file location is applicable to virtual disks. The reason for this is that virtio-blk will always report an alignment value for the partition. Physical disks may or may not report an alignment value. /sys/block/ disk /queue/physical_block_size /sys/block/ disk /queue/logical_block_size /sys/block/ disk /queue/minimum_io_size /sys/block/ disk /queue/optimal_io_size The kernel will still export these sysfs attributes for "legacy" devices that do not provide I/O parameters information, for example: Example 23.1. sysfs Interface Block Device ioctls BLKALIGNOFF : alignment_offset BLKPBSZGET : physical_block_size BLKSSZGET : logical_block_size BLKIOMIN : minimum_io_size BLKIOOPT : optimal_io_size
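A hedged example of reading these values from a shell for a disk named sda (substitute your own device); the blockdev calls are shown as one common way to exercise the same ioctls and are not part of this section.
# I/O parameters exported through sysfs
cat /sys/block/sda/alignment_offset
cat /sys/block/sda/queue/logical_block_size
cat /sys/block/sda/queue/physical_block_size
cat /sys/block/sda/queue/minimum_io_size
cat /sys/block/sda/queue/optimal_io_size
# The same values through the block device ioctls (BLKSSZGET, BLKPBSZGET, BLKIOMIN, BLKIOOPT, BLKALIGNOFF)
blockdev --getss --getpbsz --getiomin --getioopt --getalignoff /dev/sda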
[ "alignment_offset: 0 physical_block_size: 512 logical_block_size: 512 minimum_io_size: 512 optimal_io_size: 0" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/iolimuserspace
Chapter 9. Build-time network policy tools
Chapter 9. Build-time network policy tools Build-time network policy tools let you automate the creation and validation of Kubernetes network policies in your development and operations workflows using the roxctl CLI. These tools work with a specified file directory containing your project's workload and network policy manifests and do not require RHACS authentication. Table 9.1. Network policy tools Command Description roxctl netpol generate Generates Kubernetes network policies by analyzing your project's YAML manifests in a specified directory. For more information, see Using the build-time network policy generator . roxctl netpol connectivity map Lists the allowed connections between workloads in your project directory by examining the workload and Kubernetes network policy manifests. You can generate the output in various text formats or in a graphical .dot format. For more information, see Connectivity mapping using the roxctl netpol connectivity map command . roxctl netpol connectivity diff Creates a list of variations in the allowed connections between two project versions. This is determined by the workload and Kubernetes network policy manifests in each version's directory. This feature shows the semantic differences which are not obvious when performing a source code (syntactic) diff . For more information, see Identifying the differences in allowed connections between project versions . 9.1. Using the build-time network policy generator The build-time network policy generator can automatically generate Kubernetes network policies based on application YAML manifests. You can use it to develop network policies as part of the continuous integration/continuous deployment (CI/CD) pipeline before deploying applications on your cluster. Red Hat developed this feature in partnership with the developers of the NP-Guard project . First, the build-time network policy generator analyzes Kubernetes manifests in a local folder, including service manifests, config maps, and workload manifests such as Pod , Deployment , ReplicaSet , Job , DaemonSet , and StatefulSet . Then, it discovers the required connectivity and creates the Kubernetes network policies to achieve pod isolation. These policies allow no more and no less than the needed ingress and egress traffic. 9.1.1. Generating build-time network policies The build-time network policy generator is included in the roxctl CLI. For the build-time network policy generation feature, roxctl CLI does not need to communicate with RHACS Central so you can use it in any development environment. Prerequisites The build-time network policy generator recursively scans the directory you specify when you run the command. Therefore, before you run the command, you must already have service manifests, config maps, and workload manifests such as Pod , Deployment , ReplicaSet , Job , DaemonSet , and StatefulSet as YAML files in the specified directory. Verify that you can apply these YAML files as-is using the kubectl apply -f command. The build-time network policy generator does not work with files that use Helm style templating. Verify that the service network addresses are not hard-coded. Every workload that needs to connect to a service must specify the service network address as a variable. You can specify this variable by using the workload's resource environment variable or in a config map. 
Example 1: using an environment variable Example 2: using a config map Example 3: using a config map Service network addresses must match the following official regular expression pattern: 1 In this pattern, <svc> is the service name. <ns> is the namespace where you defined the service. <portNum> is the exposed service port number. Following are some examples that match the pattern: wordpress-mysql:3306 redis-follower.redis.svc.cluster.local:6379 redis-leader.redis http://rating-service. Procedure Verify that the build-time network policy generation feature is available by running the help command: USD roxctl netpol generate -h Generate the policies by using the netpol generate command: USD roxctl netpol generate <folder_path> [flags] 1 1 Specify the path to the folder, which can include sub-folders that contain YAML resources for analysis. The command scans the entire sub-folder tree. Optionally, you can also specify parameters to modify the behavior of the command. For more information about optional parameters, see roxctl netpol generate command options . steps After generating the policies, you must inspect them for completeness and accuracy, in case any relevant network address was not specified as expected in the YAML files. Most importantly, verify that required connections are not blocked by the isolating policies. To help with this inspection you can use the roxctl netpol connectivity map tool. Note Applying network policies to the cluster as part of the workload deployment using automation saves time and ensures accuracy. You can follow a GitOps approach by submitting the generated policies using pull requests, providing the team an opportunity to review the policies before deploying them as part of the pipeline. 9.1.2. roxctl netpol generate command options The roxctl netpol generate command supports the following options: Option Description -h, --help View the help text for the netpol command. -d, --output-dir <dir> Save the generated policies into a target folder. One file per policy. -f, --output-file <filename> Save and merge the generated policies into a single YAML file. --fail Fail on the first encountered error. The default value is false . --remove Remove the output path if it already exist. --strict Treat warnings as errors. The default value is false . 9.2. Connectivity mapping using the roxctl netpol connectivity map command Connectivity mapping provides details on the allowed connections between different workloads based on network policies defined in Kubernetes manifests. You can visualize and understand how different workloads in your Kubernetes environment are allowed to communicate with each other according to the network policies you set up. To retrieve connectivity mapping information, the roxctl netpol connectivity map command requires a directory path that contains Kubernetes workloads and network policy manifests. The output provides details about connectivity details within the Kubernetes resources analyzed. 9.2.1. Retrieving connectivity mapping information from a Kubernetes manifest directory Procedure Run the following command to retrieve the connectivity mapping information: USD roxctl netpol connectivity map <folder_path> [flags] 1 1 Specify the path to the folder, which can include sub-folders that contain YAML resources and network policies for analysis, for example, netpol-analysis-example-minimal/ . The command scans the entire sub-folder tree. Optionally, you can also specify parameters to modify the behavior of the command. 
For more information about optional parameters, see roxctl netpol connectivity map command options . Example 9.1. Example output src dst conn 0.0.0.0-255.255.255.255 default/frontend[Deployment] TCP 8080 default/frontend[Deployment] 0.0.0.0-255.255.255.255 UDP 53 default/frontend[Deployment] default/backend[Deployment] TCP 9090 The output shows you a table with a list of allowed connectivity lines. Each connectivity line consists of three parts: source ( src ), destination ( dst ), and allowed connectivity attributes ( conn ). You can interpret src as the source endpoint, dst as the destination endpoint, and conn as the allowable connectivity attributes. An endpoint has the format namespace/name[Kind] , for example, default/backend[Deployment] . 9.2.2. Connectivity map output formats and visualizations You can use various output formats, including txt , md , csv , json , and dot . The dot format is ideal for visualizing the output as a connectivity graph. It can be viewed using graph visualization software such as Graphviz tool , and extensions to VSCode . You can convert the dot output to formats such as svg , jpeg , or png using Graphviz, whether it is installed locally or through an online viewer. 9.2.3. Generating svg graphs from the dot output using Graphviz Follow these steps to create a graph in svg format from the dot output. Prerequisites Graphviz is installed on your local system. Procedure Run the following command to create the graph in svg format: USD dot -Tsvg connlist_output.dot > connlist_output_graph.svg The following are examples of the dot output and the resulting graph generated by Graphviz: Example 1: dot output Example 2: Graph generated by Graphviz 9.2.4. roxctl netpol connectivity map command options The roxctl netpol connectivity map command supports the following options: Option Description --fail Fail on the first encountered error. The default value is false . --focus-workload string Focus on connections of a specified workload name in the output. -h , --help View the help text for the roxctl netpol connectivity map command. -f , --output-file string Save the connections list output into a specific file. -o , --output-format string Configure the output format. The supported formats are txt , json , md , dot , and csv . The default value is txt . --remove Remove the output path if it already exists. The default value is false . --save-to-file Save the connections list output into a default file. The default value is false . --strict Treat warnings as errors. The default value is false . 9.3. Identifying the differences in allowed connections between project versions This command helps you understand the differences in allowed connections between two project versions. It analyses the workload and Kubernetes network policy manifests located in each version's directory and creates a representation of the differences in text format. You can view connectivity difference reports in a variety of output formats, including text , md , dot , and csv . 9.3.1. Generating connectivity difference reports with the roxctl netpol connectivity diff command To produce a connectivity difference report, the roxctl netpol connectivity diff command requires two folders, dir1 and dir2 , each containing Kubernetes manifests, including network policies. 
Procedure Run the following command to determine the connectivity differences between the Kubernetes manifests in the specified directories: USD roxctl netpol connectivity diff --dir1= <folder_path_1> --dir2= <folder_path_2> [flags] 1 1 Specify the path to the folders, which can include sub-folders that contain YAML resources and network policies for analysis. The command scans the entire sub-folder trees for both the directories. For example, <folder_path_1> is netpol-analysis-example-minimal/ and <folder_path_2> is netpol-diff-example-minimal/ . Optionally, you can also specify parameters to modify the behavior of the command. For more information about optional parameters, see roxctl netpol connectivity diff command options . Note The command considers all YAML files that you can accept using kubectl apply -f , and then these become valid inputs for your roxctl netpol connectivity diff command. Example 9.2. Example output diff-type source destination dir 1 dir 2 workloads-diff-info changed default/frontend[Deployment] default/backend[Deployment] TCP 9090 TCP 9090,UDP 53 added 0.0.0.0-255.255.255.255 default/backend[Deployment] No Connections TCP 9090 The semantic difference report gives you an overview of the connections that were changed, added, or removed in dir2 compared to the connections allowed in dir1 . When you review the output, each line represents one allowed connection that was added, removed, or changed in dir2 compared to dir1 . The following are example outputs generated by the roxctl netpol connectivity diff command in various formats: Example 1: text format Example 2: md format Example 3: svg graph generated from dot format Example 4: csv format If applicable, the workloads-diff-info provides additional details about added or removed workloads related to the added or removed connection. For example, if a connection from workload A to workload B is removed because workload B was deleted, the workloads-diff-info indicates that workload B was removed. However, if such a connection was removed only because of network policy changes and neither workload A nor B was deleted, the workloads-diff-info is empty. 9.3.2. roxctl netpol connectivity diff command options The roxctl netpol connectivity diff command supports the following options: Option Description --dir1 string First directory path of the input resources. This is a mandatory option. --dir2 string Second directory path of the input resources to be compared with the first directory path. This is a mandatory option. --fail Fail on the first encountered error. The default value is false . -h , --help View the help text for the roxctl netpol connectivity diff command. -f , --output-file string Save the connections difference output into a specific file. -o , --output-format string Configure the output format. The supported formats are txt , md , dot , and csv . The default value is txt . --remove Remove the output path if it already exists. The default value is false . --save-to-file Save the connections difference output into default a file. The default value is false . --strict Treat warnings as errors. The default value is false . 9.3.3. Distinguishing between syntactic and semantic difference outputs In the following example, dir1 is netpol-analysis-example-minimal/ , and dir2 is netpol-diff-example-minimal/ . The difference between the directories is a small change in the network policy backend-netpol . 
Example policy from dir1 : apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: creationTimestamp: null name: backend-netpol spec: ingress: - from: - podSelector: matchLabels: app: frontend ports: - port: 9090 protocol: TCP podSelector: matchLabels: app: backendservice policyTypes: - Ingress - Egress status: {} The change in dir2 is an added - before the ports attribute, which produces a difference output. 9.3.3.1. Syntactic difference output Procedure Run the following command to compare the contents of the netpols.yaml files in the two specified directories: Example output 12c12 < - ports: --- > ports: 9.3.3.2. Semantic difference output Procedure Run the following command to analyze the connectivity differences between the Kubernetes manifests and network policies in the two specified directories: USD roxctl netpol connectivity diff --dir1=roxctl/netpol/connectivity/diff/testdata/netpol-analysis-example-minimal/ --dir2=roxctl/netpol/connectivity/diff/testdata/netpol-diff-example-minimal Example output Connectivity diff: diff-type: changed, source: default/frontend[Deployment], destination: default/backend[Deployment], dir1: TCP 9090, dir2: TCP 9090,UDP 53 diff-type: added, source: 0.0.0.0-255.255.255.255, destination: default/backend[Deployment], dir1: No Connections, dir2: TCP 9090
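To tie the three commands together, the following is a sketch of how they might be chained in a pipeline step; the directory names and output paths are illustrative only.
# Generate Kubernetes network policies from the manifests in deploy/, one file per policy
roxctl netpol generate deploy/ --output-dir deploy/netpols/
# Map the allowed connections and render the result as a graph
roxctl netpol connectivity map deploy/ -o dot -f connlist.dot
dot -Tsvg connlist.dot > connlist.svg
# Report the semantic differences between two versions of the project
roxctl netpol connectivity diff --dir1=release-1/ --dir2=release-2/ -o md -f diff.md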
[ "(http(s)?://)?<svc>(.<ns>(.svc.cluster.local)?)?(:<portNum>)? 1", "roxctl netpol generate -h", "roxctl netpol generate <folder_path> [flags] 1", "roxctl netpol connectivity map <folder_path> [flags] 1", "dot -Tsvg connlist_output.dot > connlist_output_graph.svg", "roxctl netpol connectivity diff --dir1= <folder_path_1> --dir2= <folder_path_2> [flags] 1", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: creationTimestamp: null name: backend-netpol spec: ingress: - from: - podSelector: matchLabels: app: frontend ports: - port: 9090 protocol: TCP podSelector: matchLabels: app: backendservice policyTypes: - Ingress - Egress status: {}", "diff netpol-diff-example-minimal/netpols.yaml netpol-analysis-example-minimal/netpols.yaml", "12c12 < - ports: --- > ports:", "roxctl netpol connectivity diff --dir1=roxctl/netpol/connectivity/diff/testdata/netpol-analysis-example-minimal/ --dir2=roxctl/netpol/connectivity/diff/testdata/netpol-diff-example-minimal", "Connectivity diff: diff-type: changed, source: default/frontend[Deployment], destination: default/backend[Deployment], dir1: TCP 9090, dir2: TCP 9090,UDP 53 diff-type: added, source: 0.0.0.0-255.255.255.255, destination: default/backend[Deployment], dir1: No Connections, dir2: TCP 9090" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/operating/build-time-network-policy-tools
Release notes for Eclipse Temurin 17.0.8
Release notes for Eclipse Temurin 17.0.8 Red Hat build of OpenJDK 17 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.8/index
6.3. Caller Identity Login Module
6.3. Caller Identity Login Module If a client needs to supply a simple text password, certificate, or a custom serialized object as a credential to the data source, administrators can configure the CallerIdentityLoginModule . Using this login module, users are able to supply to the data source the same credential used to log into the JBoss Data Virtualization security domain.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/security_guide/caller_identity_login_module
Searching entries and tuning searches
Searching entries and tuning searches Red Hat Directory Server 12 Finding directory entries and improving search performance Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/searching_entries_and_tuning_searches/index
Chapter 5. Configuring the web console in OpenShift Container Platform
Chapter 5. Configuring the web console in OpenShift Container Platform You can modify the OpenShift Container Platform web console to set a logout redirect URL or disable the quick start tutorials. 5.1. Prerequisites Deploy an OpenShift Container Platform cluster. 5.2. Configuring the web console You can configure the web console settings by editing the console.config.openshift.io resource. Edit the console.config.openshift.io resource: USD oc edit console.config.openshift.io cluster The following example displays the sample resource definition for the console: apiVersion: config.openshift.io/v1 kind: Console metadata: name: cluster spec: authentication: logoutRedirect: "" 1 status: consoleURL: "" 2 1 Specify the URL of the page to load when a user logs out of the web console. If you do not specify a value, the user returns to the login page for the web console. Specifying a logoutRedirect URL allows your users to perform single logout (SLO) through the identity provider to destroy their single sign-on session. 2 The web console URL. To update this to a custom value, see Customizing the web console URL . 5.3. Disabling quick starts in the web console You can use the Administrator perspective of the web console to disable one or more quick starts. Prerequisites You have cluster administrator permissions and are logged in to the web console. Procedure In the Administrator perspective, navigate to Administration Cluster Settings . On the Cluster Settings page, click the Configuration tab. On the Configuration page, click the Console configuration resource with the description operator.openshift.io . From the Action drop-down list, select Customize , which opens the Cluster configuration page. On the General tab, in the Quick starts section, you can select items in either the Enabled or Disabled list, and move them from one list to the other by using the arrow buttons. To enable or disable a single quick start, click the quick start, then use the single arrow buttons to move the quick start to the appropriate list. To enable or disable multiple quick starts at once, press Ctrl and click the quick starts you want to move. Then, use the single arrow buttons to move the quick starts to the appropriate list. To enable or disable all quick starts at once, click the double arrow buttons to move all of the quick starts to the appropriate list.
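As an alternative to interactive editing, the same field can be set with a patch; the redirect URL below is a placeholder.
# Set spec.authentication.logoutRedirect without opening an editor
oc patch console.config.openshift.io cluster --type=merge \
  -p '{"spec":{"authentication":{"logoutRedirect":"https://sso.example.com/logout"}}}'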
[ "oc edit console.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Console metadata: name: cluster spec: authentication: logoutRedirect: \"\" 1 status: consoleURL: \"\" 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/web_console/configuring-web-console
9.2. Detecting Errors During Normal Processing
9.2. Detecting Errors During Normal Processing Protect server performance by detecting errors during the normal chaining operation between the database link and the remote server. The database link has two attributes - nsMaxResponseDelay and nsMaxTestResponseDelay - which work together to determine if the remote server is no longer responding. The first attribute, nsMaxResponseDelay , sets a maximum duration for an LDAP operation to complete. If the operation takes more than the amount of time specified in this attribute, the database link's server suspects that the remote server is no longer online. Once the nsMaxResponseDelay period has been met, the database link pings the remote server. During the ping, the database link issues another LDAP request, a simple search request for an object that does not exist in the remote server. The duration of the ping is set using the nsMaxTestResponseDelay . If the remote server does not respond before the nsMaxTestResponseDelay period has passed, then an error is returned, and the connection is flagged as down. All connections between the database link and remote server will be blocked for 30 seconds, protecting the server from a performance degradation. After 30 seconds, operation requests made by the database link to the remote server continue as normal. Both attributes are stored in the cn=config,cn=chaining database,cn=plugins,cn=config entry. The following table describes the attributes in more detail: Table 9.1. Database Link Processing Error Detection Parameters Attribute Name Description nsMaxResponseDelay Maximum amount of time it can take a remote server to respond to an LDAP operation request made by a database link before an error is suspected. This period is given in seconds. The default delay period is 60 seconds. Once this delay period has been met, the database link tests the connection with the remote server. nsMaxTestResponseDelay Duration of the test issued by the database link to check whether the remote server is responding. If a response from the remote server is not returned before this period has passed, the database link assumes the remote server is down, and the connection is not used for subsequent operations. This period is given in seconds. The default test response delay period is 15 seconds.
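For illustration, the two attributes can be adjusted with a standard ldapmodify operation against the configuration entry named above; the bind DN and server URL are placeholders, and the values shown are simply the documented defaults.
# Adjust the chaining error-detection delays (values in seconds)
ldapmodify -x -D "cn=Directory Manager" -W -H ldap://server.example.com <<EOF
dn: cn=config,cn=chaining database,cn=plugins,cn=config
changetype: modify
replace: nsMaxResponseDelay
nsMaxResponseDelay: 60
-
replace: nsMaxTestResponseDelay
nsMaxTestResponseDelay: 15
EOF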
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/performance_tuning_guide/advanced_feature_tuning_database_link_performance-detecting_errors_during_normal_processing
Chapter 5. Migration
Chapter 5. Migration This chapter provides information on migrating to versions of components included in Red Hat Software Collections 3.3. 5.1. Migrating to MariaDB 10.3 The rh-mariadb103 Software Collection is available for Red Hat Enterprise Linux 7, which includes MariaDB 5.5 as the default MySQL implementation. The rh-mariadb103 Software Collection does not conflict with the mysql or mariadb packages from the core systems. Unless the *-syspaths packages are installed (see below), it is possible to install the rh-mariadb103 Software Collection together with the mysql or mariadb packages. It is also possible to run both versions at the same time, however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Additionally, it is possible to install the rh-mariadb103 Software Collection while the rh-mariadb102 Collection is still installed and even running. The rh-mariadb103 Software Collection includes the rh-mariadb103-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mariadb103*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mariadb103* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb102 and rh-mysql80 Software Collections. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . The recommended migration path from MariaDB 5.5 to MariaDB 10.3 is to upgrade to MariaDB 10.0 first, and then upgrade by one version successively. For details, see instructions in earlier Red Hat Software Collections Release Notes: Migrating to MariaDB 10.0 , Migrating to MariaDB 10.1 , and Migrating to MariaDB 10.2 . Note The rh-mariadb103 Software Collection supports neither mounting over NFS nor dynamical registering using the scl register command. 5.1.1. Notable Differences Between the rh-mariadb102 and rh-mariadb103 Software Collections The mariadb-bench subpackage has been removed. The default allowed level of the plug-in maturity has been changed to one level less than the server maturity. As a result, plug-ins with a lower maturity level that were previously working, will no longer load. For more information regarding MariaDB 10.3 , see the upstream documentation about changes and about upgrading . 5.1.2. Upgrading from the rh-mariadb102 to the rh-mariadb103 Software Collection Important Prior to upgrading, back up all your data, including any MariaDB databases. Stop the rh-mariadb102 database server if it is still running. Before stopping the server, set the innodb_fast_shutdown option to 0 , so that InnoDB performs a slow shutdown, including a full purge and insert buffer merge. Read more about this option in the upstream documentation . This operation can take a longer time than in case of a normal shutdown. mysql -uroot -p -e "SET GLOBAL innodb_fast_shutdown = 0" Stop the rh-mariadb102 server. systemctl stop rh-mariadb102-mariadb.service Install the rh-mariadb103 Software Collection, including the subpackage providing the mysql_upgrade utility. yum install rh-mariadb103-mariadb-server rh-mariadb103-mariadb-server-utils Note that it is possible to install the rh-mariadb103 Software Collection while the rh-mariadb102 Software Collection is still installed because these Collections do not conflict. 
Inspect configuration of rh-mariadb103 , which is stored in the /etc/opt/rh/rh-mariadb103/my.cnf file and the /etc/opt/rh/rh-mariadb103/my.cnf.d/ directory. Compare it with configuration of rh-mariadb102 stored in /etc/opt/rh/rh-mariadb102/my.cnf and /etc/opt/rh/rh-mariadb102/my.cnf.d/ and adjust it if necessary. All data of the rh-mariadb102 Software Collection is stored in the /var/opt/rh/rh-mariadb102/lib/mysql/ directory unless configured differently. Copy the whole content of this directory to /var/opt/rh/rh-mariadb103/lib/mysql/ . You can move the content but remember to back up your data before you continue to upgrade. Make sure the data are owned by the mysql user and SELinux context is correct. Start the rh-mariadb103 database server. systemctl start rh-mariadb103-mariadb.service Perform the data migration. Note that running the mysql_upgrade command is required due to upstream changes introduced in MDEV-14637 . scl enable rh-mariadb103 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password. scl enable rh-mariadb103 -- mysql_upgrade -p Note that when the rh-mariadb103*-syspaths packages are installed, the scl enable command is not required. However, the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb102 and rh-mysql80 Software Collections. 5.2. Migrating to MariaDB 10.2 Red Hat Enterprise Linux 6 contains MySQL 5.1 as the default MySQL implementation. Red Hat Enterprise Linux 7 includes MariaDB 5.5 as the default MySQL implementation. MariaDB is a community-developed drop-in replacement for MySQL . MariaDB 10.1 has been available as a Software Collection since Red Hat Software Collections 2.2; Red Hat Software Collections 3.3 is distributed with MariaDB 10.2 . The rh-mariadb102 Software Collection, available for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7, does not conflict with the mysql or mariadb packages from the core systems. Unless the *-syspaths packages are installed (see below), it is possible to install the rh-mariadb102 Software Collection together with the mysql or mariadb packages. It is also possible to run both versions at the same time, however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Additionally, it is possible to install the rh-mariadb102 Software Collection while the rh-mariadb101 Collection is still installed and even running. The recommended migration path from MariaDB 5.5 to MariaDB 10.3 is to upgrade to MariaDB 10.0 first, and then upgrade by one version successively. For details, see instructions in earlier Red Hat Software Collections Release Notes: Migrating to MariaDB 10.0 and Migrating to MariaDB 10.1 . For more information about MariaDB 10.2 , see the upstream documentation about changes in version 10.2 and about upgrading . Note The rh-mariadb102 Software Collection supports neither mounting over NFS nor dynamical registering using the scl register command. 5.2.1. Notable Differences Between the rh-mariadb101 and rh-mariadb102 Software Collections Major changes in MariaDB 10.2 are described in the Red Hat Software Collections 3.0 Release Notes . Since MariaDB 10.2 , behavior of the SQL_MODE variable has been changed; see the upstream documentation for details. 
Multiple options have changed their default values or have been deprecated or removed. For details, see the Knowledgebase article Migrating from MariaDB 10.1 to the MariaDB 10.2 Software Collection . The rh-mariadb102 Software Collection includes the rh-mariadb102-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other files. After installing the rh-mariadb102*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mariadb102* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mysql80 Software Collection. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . 5.2.2. Upgrading from the rh-mariadb101 to the rh-mariadb102 Software Collection Important Prior to upgrading, back up all your data, including any MariaDB databases. Stop the rh-mariadb101 database server if it is still running. Before stopping the server, set the innodb_fast_shutdown option to 0 , so that InnoDB performs a slow shutdown, including a full purge and insert buffer merge. Read more about this option in the upstream documentation . This operation can take longer than a normal shutdown. mysql -uroot -p -e "SET GLOBAL innodb_fast_shutdown = 0" Stop the rh-mariadb101 server. service rh-mariadb101-mariadb stop Install the rh-mariadb102 Software Collection. yum install rh-mariadb102-mariadb-server Note that it is possible to install the rh-mariadb102 Software Collection while the rh-mariadb101 Software Collection is still installed because these Collections do not conflict. Inspect the configuration of rh-mariadb102 , which is stored in the /etc/opt/rh/rh-mariadb102/my.cnf file and the /etc/opt/rh/rh-mariadb102/my.cnf.d/ directory. Compare it with the configuration of rh-mariadb101 stored in /etc/opt/rh/rh-mariadb101/my.cnf and /etc/opt/rh/rh-mariadb101/my.cnf.d/ and adjust it if necessary. All data of the rh-mariadb101 Software Collection is stored in the /var/opt/rh/rh-mariadb101/lib/mysql/ directory unless configured differently. Copy the whole content of this directory to /var/opt/rh/rh-mariadb102/lib/mysql/ . You can move the content but remember to back up your data before you continue to upgrade. Make sure the data are owned by the mysql user and that the SELinux context is correct. Start the rh-mariadb102 database server. service rh-mariadb102-mariadb start Perform the data migration. scl enable rh-mariadb102 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password. scl enable rh-mariadb102 -- mysql_upgrade -p Note that when the rh-mariadb102*-syspaths packages are installed, the scl enable command is not required. However, the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mysql80 Software Collection. 5.3. Migrating to MySQL 8.0 The rh-mysql80 Software Collection is available for Red Hat Enterprise Linux 7, which includes MariaDB 5.5 as the default MySQL implementation. The rh-mysql80 Software Collection conflicts neither with the mysql or mariadb packages from the core systems nor with the rh-mysql* or rh-mariadb* Software Collections, unless the *-syspaths packages are installed (see below).
It is also possible to run multiple versions at the same time; however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Note that it is possible to upgrade to MySQL 8.0 only from MySQL 5.7 . If you need to upgrade from an earlier version, upgrade to MySQL 5.7 first. Instructions on how to upgrade to MySQL 5.7 are available in Section 5.4, "Migrating to MySQL 5.7" . 5.3.1. Notable Differences Between MySQL 5.7 and MySQL 8.0 Differences Specific to the rh-mysql80 Software Collection The MySQL 8.0 server provided by the rh-mysql80 Software Collection is configured to use mysql_native_password as the default authentication plug-in because client tools and libraries in Red Hat Enterprise Linux 7 are incompatible with the caching_sha2_password method, which is used by default in the upstream MySQL 8.0 version. To change the default authentication plug-in to caching_sha2_password , edit the /etc/opt/rh/rh-mysql80/my.cnf.d/mysql-default-authentication-plugin.cnf file so that its [mysqld] section sets default_authentication_plugin=caching_sha2_password . For more information about the caching_sha2_password authentication plug-in, see the upstream documentation . The rh-mysql80 Software Collection includes the rh-mysql80-syspaths package, which installs the rh-mysql80-mysql-config-syspaths , rh-mysql80-mysql-server-syspaths , and rh-mysql80-mysql-syspaths packages. These subpackages provide system-wide wrappers for binaries, scripts, manual pages, and other files. After installing the rh-mysql80*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mysql80* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb102 and rh-mariadb103 Software Collections. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . General Changes in MySQL 8.0 Binary logging is enabled by default during the server startup. The log_bin system variable is now set to ON by default even if the --log-bin option has not been specified. To disable binary logging, specify the --skip-log-bin or --disable-log-bin option at startup. For a CREATE FUNCTION statement to be accepted, at least one of the DETERMINISTIC , NO SQL , or READS SQL DATA keywords must be specified explicitly; otherwise, an error occurs. Certain features related to account management have been removed. Namely, using the GRANT statement to modify account properties other than privilege assignments, such as authentication, SSL, and resource-limit properties, is no longer possible. To establish the mentioned properties at account-creation time, use the CREATE USER statement. To modify these properties, use the ALTER USER statement. Certain SSL-related options have been removed on the client side. Use the --ssl-mode=REQUIRED option instead of --ssl=1 or --enable-ssl . Use the --ssl-mode=DISABLED option instead of --ssl=0 , --skip-ssl , or --disable-ssl . Use the --ssl-mode=VERIFY_IDENTITY option instead of the --ssl-verify-server-cert option. Note that these options remain unchanged on the server side. The default character set has been changed from latin1 to utf8mb4 . The utf8 character set is currently an alias for utf8mb3 , but in the future it will become a reference to utf8mb4 . To prevent ambiguity, specify utf8mb4 explicitly for character set references instead of utf8 . Setting user variables in statements other than SET has been deprecated.
The log_syslog variable, which previously configured error logging to the system logs, has been removed. Certain incompatible changes to spatial data support have been introduced. The deprecated ASC or DESC qualifiers for GROUP BY clauses have been removed. To produce a given sort order, provide an ORDER BY clause. For detailed changes in MySQL 8.0 compared to earlier versions, see the upstream documentation: What Is New in MySQL 8.0 and Changes Affecting Upgrades to MySQL 8.0 . 5.3.2. Upgrading to the rh-mysql80 Software Collection Important Prior to upgrading, back up all your data, including any MySQL databases. Install the rh-mysql80 Software Collection. yum install rh-mysql80-mysql-server Inspect the configuration of rh-mysql80 , which is stored in the /etc/opt/rh/rh-mysql80/my.cnf file and the /etc/opt/rh/rh-mysql80/my.cnf.d/ directory. Compare it with the configuration of rh-mysql57 stored in /etc/opt/rh/rh-mysql57/my.cnf and /etc/opt/rh/rh-mysql57/my.cnf.d/ and adjust it if necessary. Stop the rh-mysql57 database server, if it is still running. systemctl stop rh-mysql57-mysqld.service All data of the rh-mysql57 Software Collection is stored in the /var/opt/rh/rh-mysql57/lib/mysql/ directory. Copy the whole content of this directory to /var/opt/rh/rh-mysql80/lib/mysql/ . You can also move the content but remember to back up your data before you continue to upgrade. Start the rh-mysql80 database server. systemctl start rh-mysql80-mysqld.service Perform the data migration. scl enable rh-mysql80 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password. scl enable rh-mysql80 -- mysql_upgrade -p Note that when the rh-mysql80*-syspaths packages are installed, the scl enable command is not required. However, the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb102 and rh-mariadb103 Software Collections. 5.4. Migrating to MySQL 5.7 Red Hat Enterprise Linux 6 contains MySQL 5.1 as the default MySQL implementation. Red Hat Enterprise Linux 7 includes MariaDB 5.5 as the default MySQL implementation. In addition to these basic versions, MySQL 5.6 has been available as a Software Collection for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 since Red Hat Software Collections 2.0. The rh-mysql57 Software Collection, available for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7, conflicts neither with the mysql or mariadb packages from the core systems nor with the rh-mysql56 Software Collection, so it is possible to install the rh-mysql57 Software Collection together with the mysql , mariadb , or rh-mysql56 packages. It is also possible to run multiple versions at the same time; however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Note that it is possible to upgrade to MySQL 5.7 only from MySQL 5.6 . If you need to upgrade from an earlier version, upgrade to MySQL 5.6 first. Instructions on how to upgrade to MySQL 5.6 are available in the Red Hat Software Collections 2.2 Release Notes . 5.4.1. Notable Differences Between MySQL 5.6 and MySQL 5.7 The mysql-bench subpackage is not included in the rh-mysql57 Software Collection. Since MySQL 5.7.7 , the default SQL mode includes NO_AUTO_CREATE_USER .
Therefore, it is necessary to create MySQL accounts using the CREATE USER statement because the GRANT statement no longer creates a user by default. See the upstream documentation for details. For detailed changes in MySQL 5.7 compared to earlier versions, see the upstream documentation: What Is New in MySQL 5.7 and Changes Affecting Upgrades to MySQL 5.7 . 5.4.2. Upgrading to the rh-mysql57 Software Collection Important Prior to upgrading, back up all your data, including any MySQL databases. Install the rh-mysql57 Software Collection. yum install rh-mysql57-mysql-server Inspect the configuration of rh-mysql57 , which is stored in the /etc/opt/rh/rh-mysql57/my.cnf file and the /etc/opt/rh/rh-mysql57/my.cnf.d/ directory. Compare it with the configuration of rh-mysql56 stored in /etc/opt/rh/rh-mysql56/my.cnf and /etc/opt/rh/rh-mysql56/my.cnf.d/ and adjust it if necessary. Stop the rh-mysql56 database server, if it is still running. service rh-mysql56-mysqld stop All data of the rh-mysql56 Software Collection is stored in the /var/opt/rh/rh-mysql56/lib/mysql/ directory. Copy the whole content of this directory to /var/opt/rh/rh-mysql57/lib/mysql/ . You can also move the content but remember to back up your data before you continue to upgrade. Start the rh-mysql57 database server. service rh-mysql57-mysqld start Perform the data migration. scl enable rh-mysql57 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password. scl enable rh-mysql57 -- mysql_upgrade -p 5.5. Migrating to MongoDB 3.6 Red Hat Software Collections 3.3 is released with MongoDB 3.6 , provided by the rh-mongodb36 Software Collection and available only for Red Hat Enterprise Linux 7. The rh-mongodb36 Software Collection includes the rh-mongodb36-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other files. After installing the rh-mongodb36*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mongodb36* packages. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . 5.5.1. Notable Differences Between MongoDB 3.4 and MongoDB 3.6 General Changes The rh-mongodb36 Software Collection introduces the following significant general change: On Non-Uniform Memory Access (NUMA) hardware, it is possible to configure systemd services to be launched using the numactl command; see the upstream recommendation . To use MongoDB with the numactl command, you need to install the numactl RPM package and change the /etc/opt/rh/rh-mongodb36/sysconfig/mongod and /etc/opt/rh/rh-mongodb36/sysconfig/mongos configuration files accordingly. Compatibility Changes MongoDB 3.6 includes various minor changes that can affect compatibility with earlier versions of MongoDB : MongoDB binaries now bind to localhost by default, so listening on different IP addresses needs to be explicitly enabled. Note that this is already the default behavior for systemd services distributed with MongoDB Software Collections. The MONGODB-CR authentication mechanism has been deprecated. For databases with users created by MongoDB versions earlier than 3.0, upgrade the authentication schema to SCRAM .
The HTTP interface and REST API have been removed. Arbiters in replica sets have priority 0. Master-slave replication has been deprecated. For detailed compatibility changes in MongoDB 3.6 , see the upstream release notes . Backwards Incompatible Features The following MongoDB 3.6 features are backwards incompatible and require the version to be set to 3.6 using the featureCompatibilityVersion command : UUID for collections $jsonSchema document validation Change streams Chunk aware secondaries View definitions, document validators, and partial index filters that use version 3.6 query features Sessions and retryable writes Users and roles with authenticationRestrictions For details regarding backward incompatible changes in MongoDB 3.6 , see the upstream release notes . 5.5.2. Upgrading from the rh-mongodb34 to the rh-mongodb36 Software Collection Important Before migrating from the rh-mongodb34 to the rh-mongodb36 Software Collection, back up all your data, including any MongoDB databases, which are by default stored in the /var/opt/rh/rh-mongodb34/lib/mongodb/ directory. In addition, see the Compatibility Changes to ensure that your applications and deployments are compatible with MongoDB 3.6 . To upgrade to the rh-mongodb36 Software Collection, perform the following steps. To be able to upgrade, the rh-mongodb34 instance must have featureCompatibilityVersion set to 3.4 . Check featureCompatibilityVersion : ~]$ scl enable rh-mongodb34 'mongo --host localhost --port 27017 admin' --eval 'db.adminCommand({getParameter: 1, featureCompatibilityVersion: 1})' If the mongod server is configured with enabled access control, add the --username and --password options to the mongo command. Install the MongoDB servers and shells from the rh-mongodb36 Software Collections: ~]# yum install rh-mongodb36 Stop the MongoDB 3.4 server: ~]# systemctl stop rh-mongodb34-mongod.service Copy your data to the new location: ~]# cp -a /var/opt/rh/rh-mongodb34/lib/mongodb/* /var/opt/rh/rh-mongodb36/lib/mongodb/ Configure the rh-mongodb36-mongod daemon in the /etc/opt/rh/rh-mongodb36/mongod.conf file. Start the MongoDB 3.6 server: ~]# systemctl start rh-mongodb36-mongod.service Enable backwards incompatible features: ~]$ scl enable rh-mongodb36 'mongo --host localhost --port 27017 admin' --eval 'db.adminCommand( { setFeatureCompatibilityVersion: "3.6" } )' If the mongod server is configured with enabled access control, add the --username and --password options to the mongo command. Note After upgrading, it is recommended to run the deployment first without enabling the backwards incompatible features for a burn-in period of time, to minimize the likelihood of a downgrade. For detailed information about upgrading, see the upstream release notes . For information about upgrading a Replica Set, see the upstream MongoDB Manual . For information about upgrading a Sharded Cluster, see the upstream MongoDB Manual . 5.6. Migrating to MongoDB 3.4 The rh-mongodb34 Software Collection, available for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7, provides MongoDB 3.4 . 5.6.1. Notable Differences Between MongoDB 3.2 and MongoDB 3.4 General Changes The rh-mongodb34 Software Collection introduces various general changes. Major changes are listed in the Knowledgebase article Migrating from MongoDB 3.2 to MongoDB 3.4 . For detailed changes, see the upstream release notes .
In addition, this Software Collection includes the rh-mongodb34-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other files. After installing the rh-mongodb34*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mongodb34* packages. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . Compatibility Changes MongoDB 3.4 includes various minor changes that can affect compatibility with earlier versions of MongoDB . For details, see the Knowledgebase article Migrating from MongoDB 3.2 to MongoDB 3.4 and the upstream documentation . Notably, the following MongoDB 3.4 features are backwards incompatible and require that the version is set to 3.4 using the featureCompatibilityVersion command: Support for creating read-only views from existing collections or other views Index version v: 2 , which adds support for collation, decimal data, and case-insensitive indexes Support for the decimal128 format with the new decimal data type For details regarding backward incompatible changes in MongoDB 3.4 , see the upstream release notes . 5.6.2. Upgrading from the rh-mongodb32 to the rh-mongodb34 Software Collection Note that once you have upgraded to MongoDB 3.4 and started using new features, you cannot downgrade to version 3.2.7 or earlier. You can only downgrade to version 3.2.8 or later. Important Before migrating from the rh-mongodb32 to the rh-mongodb34 Software Collection, back up all your data, including any MongoDB databases, which are by default stored in the /var/opt/rh/rh-mongodb32/lib/mongodb/ directory. In addition, see the compatibility changes to ensure that your applications and deployments are compatible with MongoDB 3.4 . To upgrade to the rh-mongodb34 Software Collection, perform the following steps. Install the MongoDB servers and shells from the rh-mongodb34 Software Collections: ~]# yum install rh-mongodb34 Stop the MongoDB 3.2 server: ~]# systemctl stop rh-mongodb32-mongod.service Use the service rh-mongodb32-mongodb stop command on a Red Hat Enterprise Linux 6 system. Copy your data to the new location: ~]# cp -a /var/opt/rh/rh-mongodb32/lib/mongodb/* /var/opt/rh/rh-mongodb34/lib/mongodb/ Configure the rh-mongodb34-mongod daemon in the /etc/opt/rh/rh-mongodb34/mongod.conf file. Start the MongoDB 3.4 server: ~]# systemctl start rh-mongodb34-mongod.service On Red Hat Enterprise Linux 6, use the service rh-mongodb34-mongodb start command instead. Enable backwards-incompatible features: ~]$ scl enable rh-mongodb34 'mongo --host localhost --port 27017 admin' --eval 'db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )' If the mongod server is configured with enabled access control, add the --username and --password options to the mongo command. Note that it is recommended to run the deployment after the upgrade without enabling these features first. For detailed information about upgrading, see the upstream release notes . For information about upgrading a Replica Set, see the upstream MongoDB Manual . For information about upgrading a Sharded Cluster, see the upstream MongoDB Manual . 5.7. Migrating to PostgreSQL 10 Red Hat Software Collections 3.3 is distributed with PostgreSQL 10 , available only for Red Hat Enterprise Linux 7.
The rh-postgresql10 Software Collection can be safely installed on the same machine in parallel with the base Red Hat Enterprise Linux system version of PostgreSQL or any PostgreSQL Software Collection. It is also possible to run more than one version of PostgreSQL on a machine at the same time, but you need to use different ports or IP addresses and adjust SELinux policy. See Section 5.8, "Migrating to PostgreSQL 9.6" for instructions on how to migrate to an earlier version or when using Red Hat Enterprise Linux 6. The rh-postgresql10 Software Collection includes the rh-postgresql10-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other files. After installing the rh-postgresql10*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-postgresql10* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . Important Before migrating to PostgreSQL 10 , see the upstream compatibility notes . In case of upgrading the PostgreSQL database in a container, see the container-specific instructions . The following table provides an overview of different paths in a Red Hat Enterprise Linux 7 system version of PostgreSQL provided by the postgresql package, and in the rh-postgresql96 and rh-postgresql10 Software Collections. Table 5.1. Differences in the PostgreSQL paths Content postgresql rh-postgresql96 rh-postgresql10 Executables /usr/bin/ /opt/rh/rh-postgresql96/root/usr/bin/ /opt/rh/rh-postgresql10/root/usr/bin/ Libraries /usr/lib64/ /opt/rh/rh-postgresql96/root/usr/lib64/ /opt/rh/rh-postgresql10/root/usr/lib64/ Documentation /usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql10/root/usr/share/doc/postgresql/html/ PDF documentation /usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql10/root/usr/share/doc/postgresql-docs/ Contrib documentation /usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql10/root/usr/share/doc/postgresql-contrib/ Source not installed not installed not installed Data /var/lib/pgsql/data/ /var/opt/rh/rh-postgresql96/lib/pgsql/data/ /var/opt/rh/rh-postgresql10/lib/pgsql/data/ Backup area /var/lib/pgsql/backups/ /var/opt/rh/rh-postgresql96/lib/pgsql/backups/ /var/opt/rh/rh-postgresql10/lib/pgsql/backups/ Templates /usr/share/pgsql/ /opt/rh/rh-postgresql96/root/usr/share/pgsql/ /opt/rh/rh-postgresql10/root/usr/share/pgsql/ Procedural Languages /usr/lib64/pgsql/ /opt/rh/rh-postgresql96/root/usr/lib64/pgsql/ /opt/rh/rh-postgresql10/root/usr/lib64/pgsql/ Development Headers /usr/include/pgsql/ /opt/rh/rh-postgresql96/root/usr/include/pgsql/ /opt/rh/rh-postgresql10/root/usr/include/pgsql/ Other shared data /usr/share/pgsql/ /opt/rh/rh-postgresql96/root/usr/share/pgsql/ /opt/rh/rh-postgresql10/root/usr/share/pgsql/ Regression tests /usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql96/root/usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql10/root/usr/lib64/pgsql/test/regress/ (in the -test package) 5.7.1.
Migrating from a Red Hat Enterprise Linux System Version of PostgreSQL to the PostgreSQL 10 Software Collection Red Hat Enterprise Linux 7 is distributed with PostgreSQL 9.2 . To migrate your data from a Red Hat Enterprise Linux system version of PostgreSQL to the rh-postgresql10 Software Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. Important Before migrating your data from a Red Hat Enterprise Linux system version of PostgreSQL to PostgreSQL 10, make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/lib/pgsql/data/ directory. Procedure 5.1. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : systemctl stop postgresql.service To verify that the server is not running, type: systemctl status postgresql.service Verify that the old directory /var/lib/pgsql/data/ exists: file /var/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql10/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql10/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 10 , this directory should not be present in your system. If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql10/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql10 -- postgresql-setup --upgrade Alternatively, you can use the /opt/rh/rh-postgresql10/root/usr/bin/postgresql-setup --upgrade command. Note that you can use the --upgrade-from option for upgrade from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql10-postgresql.log log file to find out if any problems occurred during the upgrade. Start the new server as root : systemctl start rh-postgresql10-postgresql.service It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql10 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 10 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 10 server, type as root : chkconfig rh-postgresql10-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql10/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.2. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : systemctl start postgresql.service Dump all data in the PostgreSQL database into a script file. 
As root , type: su - postgres -c 'pg_dumpall > ~/pgdump_file.sql' Stop the old server by running the following command as root : systemctl stop postgresql.service Initialize the data directory for the new server as root : scl enable rh-postgresql10-postgresql -- postgresql-setup --initdb Start the new server as root : systemctl start rh-postgresql10-postgresql.service Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql10 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 10 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 10 server, type as root : chkconfig rh-postgresql10-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql10/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. 5.7.2. Migrating from the PostgreSQL 9.6 Software Collection to the PostgreSQL 10 Software Collection To migrate your data from the rh-postgresql96 Software Collection to the rh-postgresql10 Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. Important Before migrating your data from PostgreSQL 9.6 to PostgreSQL 10 , make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/opt/rh/rh-postgresql96/lib/pgsql/data/ directory. Procedure 5.3. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : systemctl stop rh-postgresql96-postgresql.service To verify that the server is not running, type: systemctl status rh-postgresql96-postgresql.service Verify that the old directory /var/opt/rh/rh-postgresql96/lib/pgsql/data/ exists: file /var/opt/rh/rh-postgresql96/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql10/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql10/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 10 , this directory should not be present in your system. If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql10/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql10 -- postgresql-setup --upgrade --upgrade-from=rh-postgresql96-postgresql Alternatively, you can use the /opt/rh/rh-postgresql10/root/usr/bin/postgresql-setup --upgrade --upgrade-from=rh-postgresql96-postgresql command. Note that you can use the --upgrade-from option for upgrading from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql10-postgresql.log log file to find out if any problems occurred during the upgrade. 
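Before starting the new server, it can be useful to scan that upgrade log for problems; a minimal sketch using standard shell tools and the log path given above:
# report any error or fatal messages recorded during the upgrade
grep -iE 'error|fatal' /var/lib/pgsql/upgrade_rh-postgresql10-postgresql.log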
Start the new server as root : systemctl start rh-postgresql10-postgresql.service It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql10 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 10 server to start automatically at boot time. To disable the old PostgreSQL 9.6 server, type the following command as root : chkconfig rh-postgresql96-postgresql off To enable the PostgreSQL 10 server, type as root : chkconfig rh-postgresql10-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql10/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.4. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : systemctl start rh-postgresql96-postgresql.service Dump all data in the PostgreSQL database into a script file. As root , type: su - postgres -c 'scl enable rh-postgresql96 "pg_dumpall > ~/pgdump_file.sql"' Stop the old server by running the following command as root : systemctl stop rh-postgresql96-postgresql.service Initialize the data directory for the new server as root : scl enable rh-postgresql10-postgresql -- postgresql-setup --initdb Start the new server as root : systemctl start rh-postgresql10-postgresql.service Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql10 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 10 server to start automatically at boot time. To disable the old PostgreSQL 9.6 server, type the following command as root : chkconfig rh-postgresql96-postgresql off To enable the PostgreSQL 10 server, type as root : chkconfig rh-postgresql10-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql10/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. 5.8. Migrating to PostgreSQL 9.6 PostgreSQL 9.6 is available for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 and it can be safely installed on the same machine in parallel with PostgreSQL 8.4 from Red Hat Enterprise Linux 6, PostgreSQL 9.2 from Red Hat Enterprise Linux 7, or any version of PostgreSQL released in previous versions of Red Hat Software Collections. It is also possible to run more than one version of PostgreSQL on a machine at the same time, but you need to use different ports or IP addresses and adjust SELinux policy. Important In case of upgrading the PostgreSQL database in a container, see the container-specific instructions . Note that it is currently impossible to upgrade PostgreSQL from 9.5 to 9.6 in a container in an OpenShift environment that is configured with Gluster file volumes. 5.8.1. Notable Differences Between PostgreSQL 9.5 and PostgreSQL 9.6 The most notable changes between PostgreSQL 9.5 and PostgreSQL 9.6 are described in the upstream release notes . The rh-postgresql96 Software Collection includes the rh-postgresql96-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other files.
After installing the rh-postgresql96*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-postgresql96* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . The following table provides an overview of different paths in a Red Hat Enterprise Linux system version of PostgreSQL ( postgresql ) and in the postgresql92 , rh-postgresql95 , and rh-postgresql96 Software Collections. Note that the paths of PostgreSQL 8.4 distributed with Red Hat Enterprise Linux 6 and the system version of PostgreSQL 9.2 shipped with Red Hat Enterprise Linux 7 are the same; the paths for the rh-postgresql94 Software Collection are analogous to rh-postgresql95 . Table 5.2. Differences in the PostgreSQL paths Content postgresql postgresql92 rh-postgresql95 rh-postgresql96 Executables /usr/bin/ /opt/rh/postgresql92/root/usr/bin/ /opt/rh/rh-postgresql95/root/usr/bin/ /opt/rh/rh-postgresql96/root/usr/bin/ Libraries /usr/lib64/ /opt/rh/postgresql92/root/usr/lib64/ /opt/rh/rh-postgresql95/root/usr/lib64/ /opt/rh/rh-postgresql96/root/usr/lib64/ Documentation /usr/share/doc/postgresql/html/ /opt/rh/postgresql92/root/usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql95/root/usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql/html/ PDF documentation /usr/share/doc/postgresql-docs/ /opt/rh/postgresql92/root/usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql95/root/usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql-docs/ Contrib documentation /usr/share/doc/postgresql-contrib/ /opt/rh/postgresql92/root/usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql95/root/usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql-contrib/ Source not installed not installed not installed not installed Data /var/lib/pgsql/data/ /opt/rh/postgresql92/root/var/lib/pgsql/data/ /var/opt/rh/rh-postgresql95/lib/pgsql/data/ /var/opt/rh/rh-postgresql96/lib/pgsql/data/ Backup area /var/lib/pgsql/backups/ /opt/rh/postgresql92/root/var/lib/pgsql/backups/ /var/opt/rh/rh-postgresql95/lib/pgsql/backups/ /var/opt/rh/rh-postgresql96/lib/pgsql/backups/ Templates /usr/share/pgsql/ /opt/rh/postgresql92/root/usr/share/pgsql/ /opt/rh/rh-postgresql95/root/usr/share/pgsql/ /opt/rh/rh-postgresql96/root/usr/share/pgsql/ Procedural Languages /usr/lib64/pgsql/ /opt/rh/postgresql92/root/usr/lib64/pgsql/ /opt/rh/rh-postgresql95/root/usr/lib64/pgsql/ /opt/rh/rh-postgresql96/root/usr/lib64/pgsql/ Development Headers /usr/include/pgsql/ /opt/rh/postgresql92/root/usr/include/pgsql/ /opt/rh/rh-postgresql95/root/usr/include/pgsql/ /opt/rh/rh-postgresql96/root/usr/include/pgsql/ Other shared data /usr/share/pgsql/ /opt/rh/postgresql92/root/usr/share/pgsql/ /opt/rh/rh-postgresql95/root/usr/share/pgsql/ /opt/rh/rh-postgresql96/root/usr/share/pgsql/ Regression tests /usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/postgresql92/root/usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql95/root/usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql96/root/usr/lib64/pgsql/test/regress/ (in the -test package) For changes between PostgreSQL 8.4 and PostgreSQL 9.2 , refer to the Red Hat Software Collections 1.2 Release Notes .
Notable changes between PostgreSQL 9.2 and PostgreSQL 9.4 are described in Red Hat Software Collections 2.0 Release Notes . For differences between PostgreSQL 9.4 and PostgreSQL 9.5 , refer to Red Hat Software Collections 2.2 Release Notes . 5.8.2. Migrating from a Red Hat Enterprise Linux System Version of PostgreSQL to the PostgreSQL 9.6 Software Collection Red Hat Enterprise Linux 6 includes PostgreSQL 8.4 , Red Hat Enterprise Linux 7 is distributed with PostgreSQL 9.2 . To migrate your data from a Red Hat Enterprise Linux system version of PostgreSQL to the rh-postgresql96 Software Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. The following procedures are applicable for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 system versions of PostgreSQL . Important Before migrating your data from a Red Hat Enterprise Linux system version of PostgreSQL to PostgreSQL 9.6, make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/lib/pgsql/data/ directory. Procedure 5.5. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : service postgresql stop To verify that the server is not running, type: service postgresql status Verify that the old directory /var/lib/pgsql/data/ exists: file /var/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql96/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql96/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 9.6 , this directory should not be present in your system. If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql96/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql96 -- postgresql-setup --upgrade Alternatively, you can use the /opt/rh/rh-postgresql96/root/usr/bin/postgresql-setup --upgrade command. Note that you can use the --upgrade-from option for upgrade from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql96-postgresql.log log file to find out if any problems occurred during the upgrade. Start the new server as root : service rh-postgresql96-postgresql start It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql96 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. 
Otherwise only the postgres user will be allowed to access the database. Procedure 5.6. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : service postgresql start Dump all data in the PostgreSQL database into a script file. As root , type: su - postgres -c 'pg_dumpall > ~/pgdump_file.sql' Stop the old server by running the following command as root : service postgresql stop Initialize the data directory for the new server as root : scl enable rh-postgresql96-postgresql -- postgresql-setup --initdb Start the new server as root : service rh-postgresql96-postgresql start Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql96 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. 5.8.3. Migrating from the PostgreSQL 9.5 Software Collection to the PostgreSQL 9.6 Software Collection To migrate your data from the rh-postgresql95 Software Collection to the rh-postgresql96 Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. Important Before migrating your data from PostgreSQL 9.5 to PostgreSQL 9.6 , make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/opt/rh/rh-postgresql95/lib/pgsql/data/ directory. Procedure 5.7. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : service rh-postgresql95-postgresql stop To verify that the server is not running, type: service rh-postgresql95-postgresql status Verify that the old directory /var/opt/rh/rh-postgresql95/lib/pgsql/data/ exists: file /var/opt/rh/rh-postgresql95/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql96/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql96/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 9.6 , this directory should not be present in your system. If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql96/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql96 -- postgresql-setup --upgrade --upgrade-from=rh-postgresql95-postgresql Alternatively, you can use the /opt/rh/rh-postgresql96/root/usr/bin/postgresql-setup --upgrade --upgrade-from=rh-postgresql95-postgresql command. 
Note that you can use the --upgrade-from option for upgrading from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql96-postgresql.log log file to find out if any problems occurred during the upgrade. Start the new server as root : service rh-postgresql96-postgresql start It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql96 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old PostgreSQL 9.5 server, type the following command as root : chkconfig rh-postgresql95-postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.8. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : service rh-postgresql95-postgresql start Dump all data in the PostgreSQL database into a script file. As root , type: su - postgres -c 'scl enable rh-postgresql95 "pg_dumpall > ~/pgdump_file.sql"' Stop the old server by running the following command as root : service rh-postgresql95-postgresql stop Initialize the data directory for the new server as root : scl enable rh-postgresql96-postgresql -- postgresql-setup --initdb Start the new server as root : service rh-postgresql96-postgresql start Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql96 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old PostgreSQL 9.5 server, type the following command as root : chkconfig rh-postgresql95-postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. If you need to migrate from the postgresql92 Software Collection, refer to Red Hat Software Collections 2.0 Release Notes ; the procedure is the same; you just need to adjust the version of the new Collection. The same applies to migration from the rh-postgresql94 Software Collection, which is described in Red Hat Software Collections 2.2 Release Notes . 5.9. Migrating to nginx 1.14 The root directory for the rh-nginx114 Software Collection is located in /opt/rh/rh-nginx114/root/ . The error log is stored in /var/opt/rh/rh-nginx114/log/nginx by default. Configuration files are stored in the /etc/opt/rh/rh-nginx114/nginx/ directory. Configuration files in nginx 1.14 have the same syntax and largely the same format as previous nginx Software Collections. Configuration files (with a .conf extension) in the /etc/opt/rh/rh-nginx114/nginx/default.d/ directory are included in the default server block configuration for port 80 .
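After adjusting the configuration files described above, the bundled binary can check the syntax before the service is restarted; a minimal sketch to run as root, assuming the rh-nginx114-nginx service name, which follows the collection-service naming pattern used elsewhere in this chapter:
# test the nginx configuration provided by the Collection
scl enable rh-nginx114 -- nginx -t
systemctl restart rh-nginx114-nginx.service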
Important Before upgrading from nginx 1.12 to nginx 1.14 , back up all your data, including web pages located in the /opt/rh/nginx112/root/ tree and configuration files located in the /etc/opt/rh/nginx112/nginx/ tree. If you have made any specific changes, such as changing configuration files or setting up web applications, in the /opt/rh/nginx112/root/ tree, replicate those changes in the new /opt/rh/rh-nginx114/root/ and /etc/opt/rh/rh-nginx114/nginx/ directories, too. You can use this procedure to upgrade directly from nginx 1.8 , nginx 1.10 , or nginx 1.12 to nginx 1.14 . Use the appropriate paths in this case. For the official nginx documentation, refer to http://nginx.org/en/docs/ . 5.10. Migrating to Redis 5 Redis 3.2 , provided by the rh-redis32 Software Collection, is mostly a strict subset of Redis 4.0 , which is mostly a strict subset of Redis 5.0 . Therefore, no major issues should occur when upgrading from version 3.2 to version 5.0. To upgrade a Redis Cluster to version 5.0, a mass restart of all the instances is needed. Compatibility Notes The format of RDB files has been changed. Redis 5 is able to read formats of all the earlier versions, but earlier versions are incapable of reading the Redis 5 format. Since version 4.0, the Redis Cluster bus protocol is no longer compatible with Redis 3.2 . For minor non-backward compatible changes, see the upstream release notes for version 4.0 and version 5.0 .
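For a single non-clustered instance, the upgrade can be reduced to persisting the dataset, installing the new Collection, and restarting the service; a minimal sketch, assuming the rh-redis5 Software Collection shipped with Red Hat Software Collections 3.3, service names following the rh-<collection>-redis pattern, and data directories following the /var/opt/rh/<collection>/lib/ layout used by the other Collections in this chapter (verify the exact names and paths on your system):
# persist the current dataset and stop the old server
scl enable rh-redis32 -- redis-cli SAVE
systemctl stop rh-redis32-redis.service
# install the new Collection and carry the dump over (paths assumed, see above)
yum install rh-redis5
cp -a /var/opt/rh/rh-redis32/lib/redis/dump.rdb /var/opt/rh/rh-redis5/lib/redis/
systemctl start rh-redis5-redis.service
# confirm the new server version
scl enable rh-redis5 -- redis-cli INFO server | grep redis_version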
[ "[mysqld] default_authentication_plugin=caching_sha2_password" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.3_release_notes/chap-migration
Chapter 24. OpenShiftControllerManager [operator.openshift.io/v1]
Chapter 24. OpenShiftControllerManager [operator.openshift.io/v1] Description OpenShiftControllerManager provides information to configure an operator to manage openshift-controller-manager. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 24.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object 24.1.1. .spec Description Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 24.1.2. .status Description Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 24.1.3. 
.status.conditions Description conditions is a list of conditions and their status Type array 24.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string reason string status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 24.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 24.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Required group name namespace resource Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 24.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/openshiftcontrollermanagers DELETE : delete collection of OpenShiftControllerManager GET : list objects of kind OpenShiftControllerManager POST : create an OpenShiftControllerManager /apis/operator.openshift.io/v1/openshiftcontrollermanagers/{name} DELETE : delete an OpenShiftControllerManager GET : read the specified OpenShiftControllerManager PATCH : partially update the specified OpenShiftControllerManager PUT : replace the specified OpenShiftControllerManager /apis/operator.openshift.io/v1/openshiftcontrollermanagers/{name}/status GET : read status of the specified OpenShiftControllerManager PATCH : partially update status of the specified OpenShiftControllerManager PUT : replace status of the specified OpenShiftControllerManager 24.2.1. /apis/operator.openshift.io/v1/openshiftcontrollermanagers HTTP method DELETE Description delete collection of OpenShiftControllerManager Table 24.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OpenShiftControllerManager Table 24.2. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManagerList schema 401 - Unauthorized Empty HTTP method POST Description create an OpenShiftControllerManager Table 24.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.4. Body parameters Parameter Type Description body OpenShiftControllerManager schema Table 24.5. HTTP responses HTTP code Response body 200 - OK OpenShiftControllerManager schema 201 - Created OpenShiftControllerManager schema 202 - Accepted OpenShiftControllerManager schema 401 - Unauthorized Empty 24.2.2. /apis/operator.openshift.io/v1/openshiftcontrollermanagers/{name} Table 24.6. Global path parameters Parameter Type Description name string name of the OpenShiftControllerManager HTTP method DELETE Description delete an OpenShiftControllerManager Table 24.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 24.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OpenShiftControllerManager Table 24.9. HTTP responses HTTP code Response body 200 - OK OpenShiftControllerManager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OpenShiftControllerManager Table 24.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.11. 
HTTP responses HTTP code Response body 200 - OK OpenShiftControllerManager schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OpenShiftControllerManager Table 24.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.13. Body parameters Parameter Type Description body OpenShiftControllerManager schema Table 24.14. HTTP responses HTTP code Response body 200 - OK OpenShiftControllerManager schema 201 - Created OpenShiftControllerManager schema 401 - Unauthorized Empty 24.2.3. /apis/operator.openshift.io/v1/openshiftcontrollermanagers/{name}/status Table 24.15. Global path parameters Parameter Type Description name string name of the OpenShiftControllerManager HTTP method GET Description read status of the specified OpenShiftControllerManager Table 24.16. HTTP responses HTTP code Response body 200 - OK OpenShiftControllerManager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OpenShiftControllerManager Table 24.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.18. 
HTTP responses HTTP code Response body 200 - OK OpenShiftControllerManager schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OpenShiftControllerManager Table 24.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.20. Body parameters Parameter Type Description body OpenShiftControllerManager schema Table 24.21. HTTP responses HTTP code Response body 200 - OK OpenShiftControllerManager schema 201 - Created OpenShiftControllerManager schema 401 - Unauthorized Empty
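The endpoints above can also be exercised from the command line with oc, which wraps the same REST paths. The following is a minimal, hedged sketch: the cluster-scoped instance is assumed to be named cluster (the usual convention for operator singletons), and the Debug value is only an illustration of the logLevel field described in .spec.
# List OpenShiftControllerManager objects through the raw API path shown above
oc get --raw /apis/operator.openshift.io/v1/openshiftcontrollermanagers
# Partially update spec.logLevel on the assumed singleton instance, equivalent to the PATCH endpoint above
oc patch openshiftcontrollermanager cluster --type=merge -p '{"spec":{"logLevel":"Debug"}}'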
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/operator_apis/openshiftcontrollermanager-operator-openshift-io-v1
Installing, managing, and removing user-space components
Installing, managing, and removing user-space components Red Hat Enterprise Linux 8 Managing content in the BaseOS and AppStream repositories by using the YUM software management tool Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/installing_managing_and_removing_user-space_components/index
5.248. portreserve
5.248. portreserve 5.248.1. RHBA-2012:0447 - portreserve bug fix update An updated portreserve package that fixes two bugs is now available for Red Hat Enterprise Linux 6. The portreserve package helps services with well-known ports that lie in the portmap range. It prevents portmap from occupying a real service's port by occupying it itself, until the real service tells it to release the port, generally in the init script. Bug Fixes BZ# 614924 Prior to this update, the init script for the portreserve daemon did not always return the correct exit code. As a consequence, an incorrect error message was displayed. With this update, the init script is modified to return the correct exit codes, and also appropriate messages are now displayed. BZ# 712362 The portreserve package requires the "chkconfig" command because it is run in installation scriptlets. However, this was previously not reflected in the package metadata, and error messages could be displayed during installation. To prevent this issue, this update adds requirement tags for chkconfig. All users of portreserve are advised to upgrade to this updated package, which fixes these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/portreserve
8.2. Moving Resources Due to Failure
8.2. Moving Resources Due to Failure When you create a resource, you can configure the resource so that it will move to a new node after a defined number of failures by setting the migration-threshold option for that resource. Once the threshold has been reached, this node will no longer be allowed to run the failed resource until: The administrator manually resets the resource's failcount using the pcs resource failcount command. The resource's failure-timeout value is reached. The value of migration-threshold is set to INFINITY by default. INFINITY is defined internally as a very large but finite number. A value of 0 disables the migration-threshold feature. Note Setting a migration-threshold for a resource is not the same as configuring a resource for migration, in which the resource moves to another location without loss of state. The following example adds a migration threshold of 10 to the resource named dummy_resource , which indicates that the resource will move to a new node after 10 failures. You can add a migration threshold to the defaults for the whole cluster with the following command. To determine the resource's current failure status and limits, use the pcs resource failcount command. There are two exceptions to the migration threshold concept; they occur when a resource either fails to start or fails to stop. If the cluster property start-failure-is-fatal is set to true (which is the default), start failures cause the failcount to be set to INFINITY and thus always cause the resource to move immediately. For information on the start-failure-is-fatal option, see Table 12.1, "Cluster Properties" . Stop failures are slightly different and crucial. If a resource fails to stop and STONITH is enabled, then the cluster will fence the node in order to be able to start the resource elsewhere. If STONITH is not enabled, then the cluster has no way to continue and will not try to start the resource elsewhere, but will try to stop it again after the failure timeout.
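As a brief, hedged illustration of the failcount workflow described above, the following pcs commands check and reset the failure count for the dummy_resource example; exact output and available options can vary slightly between pcs versions.
# Display the current failure count for the resource
pcs resource failcount show dummy_resource
# Clear the failure count after the underlying problem has been fixed, so the node may run the resource again
pcs resource failcount reset dummy_resource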
[ "pcs resource meta dummy_resource migration-threshold=10", "pcs resource defaults migration-threshold=10" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-failure_migration-haar
Appendix E. Securing Red Hat Virtualization
Appendix E. Securing Red Hat Virtualization This topic includes limited information about how to secure Red Hat Virtualization. This information will increase over time. This information is specific to Red Hat Virtualization; it does not cover fundamental security practices related to: Disabling unnecessary services Authentication Authorization Accounting Penetration testing and hardening of non-RHV services Encryption of sensitive application data Prerequisites You should be proficient in your organization's security standards and practices. If possible, consult with your organization's Security Officer. Consult the Red Hat Enterprise Linux Security Guide before deploying RHEL hosts. E.1. DISA STIG for Red Hat Linux 7 The Defense Information Systems Agency (DISA) distributes Security Technical Implementation Guides (STIGs) for various platforms and operating systems. While installing Red Hat Virtualization Host (RHVH), the DISA STIG for Red Hat Linux 7 profile is one of the security policies available. Enabling this profile as your security policy during installation removes the need to regenerate SSH keys, SSL certificates, or otherwise re-configure the host later in the deployment process. Important The DISA STIG security policy is the only security policy that Red Hat officially tests and certifies. DISA STIGs are "configuration standards for DOD IA and IA-enabled devices/systems. Since 1998, DISA has played a critical role in enhancing the security posture of DoD's security systems by providing the Security Technical Implementation Guides (STIGs). The STIGs contain technical guidance to 'lock down' information systems/software that might otherwise be vulnerable to a malicious computer attack." These STIGs are based on requirements put forth by the National Institute of Standards and Technology (NIST) Special Publication 800-53, a catalog of security controls for all U.S. federal information systems except those related to national security. To determine how various profiles overlap, Red Hat refers to the Cloud Security Alliance's Cloud Controls Matrix (CCM). This CCM specifies a comprehensive set of cloud-specific security controls, and maps each one to the requirements of leading standards, best practices, and regulations. To help you verify your security policy, Red Hat provides OpenSCAP tools and Security Content Automation Protocol (SCAP) profiles for various Red Hat platforms, including RHEL and RHV. Red Hat's OpenSCAP project provides open source tools for administrators and auditors to assess, measure, and enforce SCAP baselines. NIST awarded SCAP 1.2 certification to OpenSCAP in 2014. NIST maintains the SCAP standard. SCAP-compliant profiles provide detailed low-level guidance on setting the security configuration of operating systems and applications. Red Hat publishes SCAP baselines for various products and platforms to two locations: The NIST National Checklist Program (NCP), the U.S. government repository of publicly available security checklists (or benchmarks). The Department of Defense (DoD) Cyber Exchange Additional resources NIST National Checklist Program Repository for Red Hat The DoD Cyber Exchange download page for Unix/Linux-related STIGs NIST Special Publication 800-53 Rev. 4 NIST Special Publication 800-53 Rev. 5 (DRAFT) The OpenSCAP Project Cloud Security Alliance: Cloud Controls Matrix E.2. 
Applying the DISA STIG for Red Hat Linux 7 Profile This topic shows you how to enable the DISA STIG for Red Hat Linux 7 security profile while installing the Red Hat Virtualization (RHV) Manager ("the Manager"), the Red Hat Virtualization Host (RHVH), and the Red Hat Enterprise Linux host. Enable DISA STIG for Red Hat Linux 7 for RHVH The following procedure applies to installing Red Hat Virtualization Host (RHVH) for two different purposes: Using RHVH as the host for the Manager virtual machine when you deploy the Manager as a self-hosted engine. Using RHVH as an ordinary host in an RHV cluster. If you use the Anaconda installer to install RHVH: On the Installation Summary screen, select Security Policy . On the Security Policy screen that opens, toggle the Apply security policy setting to On . Scroll down the list of profiles and select DISA STIG for Red Hat Linux 7 . Click the Select profile button. This action adds a green checkmark to the profile and adds packages to the list of Changes that were done or need to be done . Note These packages are already part of the RHVH image. RHVH ships as a single system image. Installation of packages required by any other selected security profiles which are not part of the RHVH image may not be possible. Please see the RHVH package manifest for a list of included packages. Click Done . On the Installation Summary screen, verify that the status of Security Policy is Everything okay . Later, when you log into RHVH, the command line displays the following information. Note If you deploy RHV as a Self-Hosted Engine using the command line , during the series of prompts after you enter ovirt-hosted-engine-setup , the command line will ask Do you want to apply a default OpenSCAP security profile? Enter Yes and follow the instructions to select the DISA STIG for Red Hat Linux 7 profile. Additional resources Configuring and Applying SCAP Policies During Installation
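To verify the applied policy after installation with the OpenSCAP tools mentioned above, a scan similar to the following can be run on the host. This is a hedged sketch: the data stream path and the exact DISA STIG profile ID differ between scap-security-guide versions, so confirm both with oscap info before scanning.
# List the profiles shipped in the installed SCAP content for RHEL 7
oscap info /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml
# Evaluate the host against the DISA STIG profile and write an HTML report (profile ID is an assumption to verify)
oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_stig --report /tmp/stig-report.html /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml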
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/security
probe::vm.kmalloc_node
probe::vm.kmalloc_node Name probe::vm.kmalloc_node - Fires when kmalloc_node is requested Synopsis vm.kmalloc_node Values caller_function name of the caller function gfp_flag_name type of kmemory to allocate (in string format) call_site address of the function calling this kmemory function gfp_flags type of kmemory to allocate bytes_req requested Bytes name name of the probe point ptr pointer to the kmemory allocated bytes_alloc allocated Bytes
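A minimal SystemTap script that uses this probe point and several of the values listed above is shown below as an illustration; it is not part of the tapset itself, and the allocations reported depend entirely on the running workload. Press Ctrl+C to stop the trace.
# Trace kmalloc_node requests, printing the caller, requested and allocated sizes, and GFP flags
stap -e 'probe vm.kmalloc_node {
  printf("%s requested %d bytes, allocated %d bytes, flags=%s\n",
         caller_function, bytes_req, bytes_alloc, gfp_flag_name)
}'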
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-vm-kmalloc-node
Chapter 4. Creating and building an application using the CLI
Chapter 4. Creating and building an application using the CLI 4.1. Before you begin Review About the OpenShift CLI . You must be able to access a running instance of OpenShift Container Platform. If you do not have access, contact your cluster administrator. You must have the OpenShift CLI ( oc ) downloaded and installed . 4.2. Logging in to the CLI You can log in to the OpenShift CLI ( oc ) to access and manage your cluster. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). Procedure Log into OpenShift Container Platform from the CLI using your username and password, with an OAuth token, or with a web browser: With username and password: USD oc login -u=<username> -p=<password> --server=<your-openshift-server> --insecure-skip-tls-verify With an OAuth token: USD oc login <https://api.your-openshift-server.com> --token=<tokenID> With a web browser: USD oc login <cluster_url> --web You can now create a project or issue other commands for managing your cluster. Additional resources oc login oc logout 4.3. Creating a new project A project enables a community of users to organize and manage their content in isolation. Projects are OpenShift Container Platform extensions to Kubernetes namespaces. Projects have additional features that enable user self-provisioning. Users must receive access to projects from administrators. Cluster administrators can allow developers to create their own projects. In most cases, users automatically have access to their own projects. Each project has its own set of objects, policies, constraints, and service accounts. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). Procedure To create a new project, enter the following command: USD oc new-project user-getting-started --display-name="Getting Started with OpenShift" Example output Now using project "user-getting-started" on server "https://openshift.example.com:6443". Additional resources oc new-project 4.4. Granting view permissions OpenShift Container Platform automatically creates a few special service accounts in every project. The default service account takes responsibility for running the pods. OpenShift Container Platform uses and injects this service account into every pod that launches. The following procedure creates a RoleBinding object for the default ServiceAccount object. The service account communicates with the OpenShift Container Platform API to learn about pods, services, and resources within the project. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. You must have cluster-admin or project-admin privileges. Procedure To add the view role to the default service account in the user-getting-started project , enter the following command: USD oc adm policy add-role-to-user view -z default -n user-getting-started Additional resources Understanding authentication RBAC overview oc policy add-role-to-user 4.5. Deploying your first image The simplest way to deploy an application in OpenShift Container Platform is to run an existing container image. The following procedure deploys a front-end component of an application called national-parks-app . The web application displays an interactive map. The map displays the location of major national parks across the world. Prerequisites You must have access to an OpenShift Container Platform cluster. 
Install the OpenShift CLI ( oc ). Procedure To deploy an application, enter the following command: USD oc new-app quay.io/openshiftroadshow/parksmap:latest --name=parksmap -l 'app=national-parks-app,component=parksmap,role=frontend,app.kubernetes.io/part-of=national-parks-app' Example output --> Found container image 0c2f55f (12 months old) from quay.io for "quay.io/openshiftroadshow/parksmap:latest" * An image stream tag will be created as "parksmap:latest" that will track this image --> Creating resources with label app=national-parks-app,app.kubernetes.io/part-of=national-parks-app,component=parksmap,role=frontend ... imagestream.image.openshift.io "parksmap" created deployment.apps "parksmap" created service "parksmap" created --> Success Additional resources oc new-app 4.5.1. Creating a route External clients can access applications running on OpenShift Container Platform through the routing layer; the data object behind that is a route . The default OpenShift Container Platform router (HAProxy) uses the HTTP header of the incoming request to determine where to proxy the connection. Optionally, you can define security, such as TLS, for the route. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. You must have cluster-admin or project-admin privileges. Procedure To retrieve the created application service, enter the following command: USD oc get service Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE parksmap ClusterIP <your-cluster-IP> <123.456.789> 8080/TCP 8m29s To create a route, enter the following command: USD oc create route edge parksmap --service=parksmap Example output route.route.openshift.io/parksmap created To retrieve the created application route, enter the following command: USD oc get route Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None Additional resources oc create route edge oc get 4.5.2. Examining the pod OpenShift Container Platform leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. Pods are the rough equivalent of a machine instance, physical or virtual, to a container. You can view the pods in your cluster and determine the health of those pods and the cluster as a whole. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. 
Procedure To list all pods with node names, enter the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 77s To list all pod details, enter the following command: USD oc describe pods Example output Name: parksmap-848bd4954b-5pvcc Namespace: user-getting-started Priority: 0 Node: ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c/10.0.128.4 Start Time: Sun, 13 Feb 2022 14:14:14 -0500 Labels: app=national-parks-app app.kubernetes.io/part-of=national-parks-app component=parksmap deployment=parksmap pod-template-hash=848bd4954b role=frontend Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.131.0.14" ], "default": true, "dns": {} }] k8s.v1.cni.cncf.io/network-status: [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.131.0.14" ], "default": true, "dns": {} }] openshift.io/generated-by: OpenShiftNewApp openshift.io/scc: restricted Status: Running IP: 10.131.0.14 IPs: IP: 10.131.0.14 Controlled By: ReplicaSet/parksmap-848bd4954b Containers: parksmap: Container ID: cri-o://4b2625d4f61861e33cc95ad6d455915ea8ff6b75e17650538cc33c1e3e26aeb8 Image: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Image ID: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Port: 8080/TCP Host Port: 0/TCP State: Running Started: Sun, 13 Feb 2022 14:14:25 -0500 Ready: True Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6f844 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-6f844: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 46s default-scheduler Successfully assigned user-getting-started/parksmap-848bd4954b-5pvcc to ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c Normal AddedInterface 44s multus Add eth0 [10.131.0.14/23] from openshift-sdn Normal Pulling 44s kubelet Pulling image "quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b" Normal Pulled 35s kubelet Successfully pulled image "quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b" in 9.49243308s Normal Created 35s kubelet Created container parksmap Normal Started 35s kubelet Started container parksmap Additional resources oc describe oc get oc label Viewing pods Viewing pod logs 4.5.3. Scaling the application In Kubernetes, a Deployment object defines how an application deploys. In most cases, users use Pod , Service , ReplicaSets , and Deployment resources together. In most cases, OpenShift Container Platform creates the resources for you. When you deploy the national-parks-app image, a deployment resource is created. In this example, only one Pod is deployed. The following procedure scales the national-parks-image to use two instances. Prerequisites You must have access to an OpenShift Container Platform cluster. 
You must have installed the OpenShift CLI ( oc ). You have a deployed image. Procedure To scale your application from one pod instance to two pod instances, enter the following command: USD oc scale --current-replicas=1 --replicas=2 deployment/parksmap Example output deployment.apps/parksmap scaled Verification To ensure that your application scaled properly, enter the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 7m39s parksmap-5f9579955-8tgft 1/1 Running 0 24s To scale your application back down to one pod instance, enter the following command: USD oc scale --current-replicas=2 --replicas=1 deployment/parksmap Additional resources oc scale 4.6. Deploying a Python application The following procedure deploys a back-end service for the parksmap application. The Python application performs 2D geo-spatial queries against a MongoDB database to locate and return map coordinates of all national parks in the world. The deployed back-end service is nationalparks . Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. Procedure To create a new Python application, enter the following command: USD oc new-app python~https://github.com/openshift-roadshow/nationalparks-py.git --name nationalparks -l 'app=national-parks-app,component=nationalparks,role=backend,app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=python' --allow-missing-images=true Example output --> Found image 0406f6c (13 days old) in image stream "openshift/python" under tag "3.9-ubi9" for "python" Python 3.9 ---------- Python 3.9 available as container is a base platform for building and running various Python 3.9 applications and frameworks. Python is an easy to learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python's elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms. Tags: builder, python, python39, python-39, rh-python39 * A source build using source code from https://github.com/openshift-roadshow/nationalparks-py.git will be created * The resulting image will be pushed to image stream tag "nationalparks:latest" * Use 'oc start-build' to trigger a new build --> Creating resources with label app=national-parks-app,app.kubernetes.io/name=python,app.kubernetes.io/part-of=national-parks-app,component=nationalparks,role=backend ... imagestream.image.openshift.io "nationalparks" created buildconfig.build.openshift.io "nationalparks" created deployment.apps "nationalparks" created service "nationalparks" created --> Success To create a route to expose your application, nationalparks , enter the following command: USD oc create route edge nationalparks --service=nationalparks Example output route.route.openshift.io/parksmap created To retrieve the created application route, enter the following command: USD oc get route Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None Additional resources oc new-app 4.7. 
Connecting to a database Deploy and connect a MongoDB database where the national-parks-app application stores location information. Once you mark the national-parks-app application as a backend for the map visualization tool, the parksmap deployment uses the OpenShift Container Platform discovery mechanism to display the map automatically. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. Procedure To connect to a database, enter the following command: USD oc new-app quay.io/centos7/mongodb-36-centos7:master --name mongodb-nationalparks -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -e MONGODB_DATABASE=mongodb -e MONGODB_ADMIN_PASSWORD=mongodb -l 'app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=mongodb' Example output --> Found container image dc18f52 (3 years old) from quay.io for "quay.io/centos7/mongodb-36-centos7:master" MongoDB 3.6 ----------- MongoDB (from humongous) is a free and open-source cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with schemas. This container image contains programs to run mongod server. Tags: database, mongodb, rh-mongodb36 * An image stream tag will be created as "mongodb-nationalparks:master" that will track this image --> Creating resources with label app.kubernetes.io/name=mongodb,app.kubernetes.io/part-of=national-parks-app ... imagestream.image.openshift.io "mongodb-nationalparks" created deployment.apps "mongodb-nationalparks" created service "mongodb-nationalparks" created --> Success Additional resources oc new-project 4.7.1. Creating a secret The Secret object provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. The following procedure adds the secret nationalparks-mongodb-parameters and mounts it to the nationalparks workload. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. Procedure To create a secret, enter the following command: USD oc create secret generic nationalparks-mongodb-parameters --from-literal=DATABASE_SERVICE_NAME=mongodb-nationalparks --from-literal=MONGODB_USER=mongodb --from-literal=MONGODB_PASSWORD=mongodb --from-literal=MONGODB_DATABASE=mongodb --from-literal=MONGODB_ADMIN_PASSWORD=mongodb Example output secret/nationalparks-mongodb-parameters created To update the environment variable to attach the mongodb secret to the nationalparks workload, enter the following command: USD oc set env --from=secret/nationalparks-mongodb-parameters deploy/nationalparks Example output deployment.apps/nationalparks updated To show the status of the nationalparks deployment, enter the following command: USD oc rollout status deployment nationalparks Example output deployment "nationalparks" successfully rolled out To show the status of the mongodb-nationalparks deployment, enter the following command: USD oc rollout status deployment mongodb-nationalparks Example output deployment "mongodb-nationalparks" successfully rolled out Additional resources oc create secret generic oc set env oc rollout status 4.7.2. 
Loading data and displaying the national parks map You deployed the parksmap and nationalparks applications and then deployed the mongodb-nationalparks database. However, no data has been loaded into the database. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. Procedure To load national parks data, enter the following command: USD oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/load Example output "Items inserted in database: 2893" To verify that your data is loaded properly, enter the following command: USD oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/all Example output (trimmed) , {"id": "Great Zimbabwe", "latitude": "-20.2674635", "longitude": "30.9337986", "name": "Great Zimbabwe"}] To add labels to the route, enter the following command: USD oc label route nationalparks type=parksmap-backend Example output route.route.openshift.io/nationalparks labeled To retrieve your routes to view your map, enter the following command: USD oc get routes Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None Copy and paste the HOST/PORT path you retrieved above into your web browser. Your browser should display a map of the national parks across the world. Figure 4.1. National parks across the world Additional resources oc exec oc label oc get
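The secrets section above notes that secrets can also be mounted into containers with a volume plugin rather than injected as environment variables. The following is a hedged sketch using oc set volume against the same workload; the volume name and mount path are arbitrary illustrative choices, not values used elsewhere in this walkthrough.
# Mount the existing secret into the nationalparks deployment as files under the given path
oc set volume deployment/nationalparks --add --name=mongodb-credentials --type=secret --secret-name=nationalparks-mongodb-parameters --mount-path=/etc/mongodb-credentials
# List the volumes now attached to the deployment
oc set volume deployment/nationalparks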
[ "oc login -u=<username> -p=<password> --server=<your-openshift-server> --insecure-skip-tls-verify", "oc login <https://api.your-openshift-server.com> --token=<tokenID>", "oc login <cluster_url> --web", "oc new-project user-getting-started --display-name=\"Getting Started with OpenShift\"", "Now using project \"user-getting-started\" on server \"https://openshift.example.com:6443\".", "oc adm policy add-role-to-user view -z default -n user-getting-started", "oc new-app quay.io/openshiftroadshow/parksmap:latest --name=parksmap -l 'app=national-parks-app,component=parksmap,role=frontend,app.kubernetes.io/part-of=national-parks-app'", "--> Found container image 0c2f55f (12 months old) from quay.io for \"quay.io/openshiftroadshow/parksmap:latest\" * An image stream tag will be created as \"parksmap:latest\" that will track this image --> Creating resources with label app=national-parks-app,app.kubernetes.io/part-of=national-parks-app,component=parksmap,role=frontend imagestream.image.openshift.io \"parksmap\" created deployment.apps \"parksmap\" created service \"parksmap\" created --> Success", "oc get service", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE parksmap ClusterIP <your-cluster-IP> <123.456.789> 8080/TCP 8m29s", "oc create route edge parksmap --service=parksmap", "route.route.openshift.io/parksmap created", "oc get route", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None", "oc get pods", "NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 77s", "oc describe pods", "Name: parksmap-848bd4954b-5pvcc Namespace: user-getting-started Priority: 0 Node: ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c/10.0.128.4 Start Time: Sun, 13 Feb 2022 14:14:14 -0500 Labels: app=national-parks-app app.kubernetes.io/part-of=national-parks-app component=parksmap deployment=parksmap pod-template-hash=848bd4954b role=frontend Annotations: k8s.v1.cni.cncf.io/network-status: [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.14\" ], \"default\": true, \"dns\": {} }] k8s.v1.cni.cncf.io/network-status: [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.14\" ], \"default\": true, \"dns\": {} }] openshift.io/generated-by: OpenShiftNewApp openshift.io/scc: restricted Status: Running IP: 10.131.0.14 IPs: IP: 10.131.0.14 Controlled By: ReplicaSet/parksmap-848bd4954b Containers: parksmap: Container ID: cri-o://4b2625d4f61861e33cc95ad6d455915ea8ff6b75e17650538cc33c1e3e26aeb8 Image: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Image ID: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Port: 8080/TCP Host Port: 0/TCP State: Running Started: Sun, 13 Feb 2022 14:14:25 -0500 Ready: True Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6f844 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-6f844: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 
300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 46s default-scheduler Successfully assigned user-getting-started/parksmap-848bd4954b-5pvcc to ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c Normal AddedInterface 44s multus Add eth0 [10.131.0.14/23] from openshift-sdn Normal Pulling 44s kubelet Pulling image \"quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b\" Normal Pulled 35s kubelet Successfully pulled image \"quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b\" in 9.49243308s Normal Created 35s kubelet Created container parksmap Normal Started 35s kubelet Started container parksmap", "oc scale --current-replicas=1 --replicas=2 deployment/parksmap", "deployment.apps/parksmap scaled", "oc get pods", "NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 7m39s parksmap-5f9579955-8tgft 1/1 Running 0 24s", "oc scale --current-replicas=2 --replicas=1 deployment/parksmap", "oc new-app python~https://github.com/openshift-roadshow/nationalparks-py.git --name nationalparks -l 'app=national-parks-app,component=nationalparks,role=backend,app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=python' --allow-missing-images=true", "--> Found image 0406f6c (13 days old) in image stream \"openshift/python\" under tag \"3.9-ubi9\" for \"python\" Python 3.9 ---------- Python 3.9 available as container is a base platform for building and running various Python 3.9 applications and frameworks. Python is an easy to learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python's elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms. Tags: builder, python, python39, python-39, rh-python39 * A source build using source code from https://github.com/openshift-roadshow/nationalparks-py.git will be created * The resulting image will be pushed to image stream tag \"nationalparks:latest\" * Use 'oc start-build' to trigger a new build --> Creating resources with label app=national-parks-app,app.kubernetes.io/name=python,app.kubernetes.io/part-of=national-parks-app,component=nationalparks,role=backend imagestream.image.openshift.io \"nationalparks\" created buildconfig.build.openshift.io \"nationalparks\" created deployment.apps \"nationalparks\" created service \"nationalparks\" created --> Success", "oc create route edge nationalparks --service=nationalparks", "route.route.openshift.io/parksmap created", "oc get route", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None", "oc new-app quay.io/centos7/mongodb-36-centos7:master --name mongodb-nationalparks -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -e MONGODB_DATABASE=mongodb -e MONGODB_ADMIN_PASSWORD=mongodb -l 'app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=mongodb'", "--> Found container image dc18f52 (3 years old) from quay.io for \"quay.io/centos7/mongodb-36-centos7:master\" MongoDB 3.6 ----------- MongoDB (from humongous) is a free and open-source cross-platform document-oriented database program. 
Classified as a NoSQL database program, MongoDB uses JSON-like documents with schemas. This container image contains programs to run mongod server. Tags: database, mongodb, rh-mongodb36 * An image stream tag will be created as \"mongodb-nationalparks:master\" that will track this image --> Creating resources with label app.kubernetes.io/name=mongodb,app.kubernetes.io/part-of=national-parks-app imagestream.image.openshift.io \"mongodb-nationalparks\" created deployment.apps \"mongodb-nationalparks\" created service \"mongodb-nationalparks\" created --> Success", "oc create secret generic nationalparks-mongodb-parameters --from-literal=DATABASE_SERVICE_NAME=mongodb-nationalparks --from-literal=MONGODB_USER=mongodb --from-literal=MONGODB_PASSWORD=mongodb --from-literal=MONGODB_DATABASE=mongodb --from-literal=MONGODB_ADMIN_PASSWORD=mongodb", "secret/nationalparks-mongodb-parameters created", "oc set env --from=secret/nationalparks-mongodb-parameters deploy/nationalparks", "deployment.apps/nationalparks updated", "oc rollout status deployment nationalparks", "deployment \"nationalparks\" successfully rolled out", "oc rollout status deployment mongodb-nationalparks", "deployment \"mongodb-nationalparks\" successfully rolled out", "oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/load", "\"Items inserted in database: 2893\"", "oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/all", ", {\"id\": \"Great Zimbabwe\", \"latitude\": \"-20.2674635\", \"longitude\": \"30.9337986\", \"name\": \"Great Zimbabwe\"}]", "oc label route nationalparks type=parksmap-backend", "route.route.openshift.io/nationalparks labeled", "oc get routes", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/getting_started/openshift-cli
5.7. Customizing Hosts with Tags
5.7. Customizing Hosts with Tags You can use tags to store information about your hosts. You can then search for hosts based on tags. For more information on searches, see Chapter 3, Searches . Customizing hosts with tags Click Compute Hosts and select a host. Click More Actions ( ), then click Assign Tags . Select the check boxes of applicable tags. Click OK . You have added extra, searchable information about your host as tags.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/customizing_hosts_with_tags
Chapter 4. Importing content
Chapter 4. Importing content This chapter outlines how you can import different types of custom content to Satellite. For example, you can use the following chapters for information on specific types of custom content but the underlying procedures are the same: Chapter 12, Managing ISO images Chapter 14, Managing custom file type content 4.1. Products and repositories in Satellite Both Red Hat content and custom content in Satellite have similarities: The relationship between a product and its repositories is the same and the repositories still require synchronization. Custom products require a subscription for hosts to access, similar to subscriptions to Red Hat products. Satellite creates a subscription for each custom product you create. Red Hat content is already organized into products. For example, Red Hat Enterprise Linux Server is a product in Satellite. The repositories for that product consist of different versions, architectures, and add-ons. For Red Hat repositories, products are created automatically after enabling the repository. For more information, see Section 4.6, "Enabling Red Hat repositories" . Other content can be organized into custom products however you want. For example, you might create an EPEL (Extra Packages for Enterprise Linux) Product and add an "EPEL 7 x86_64" repository to it. For more information about creating and packaging RPMs, see the Red Hat Enterprise Linux 7 RPM Packaging Guide . 4.2. Best practices for products and repositories Use one content type per product and content view, for example, yum content only. Make file repositories available over HTTP. If you set Protected to true, you can only download content using a global debugging certificate. Automate the creation of multiple products and repositories by using a Hammer script or an Ansible Playbook . For Red Hat content, import your Red Hat manifest into Satellite. For more information, see Chapter 2, Managing Red Hat subscriptions . Avoid uploading content to repositories with an Upstream URL . Instead, create a repository to synchronize content and upload content to without setting an Upstream URL . If you upload content to a repository that already synchronizes another repository, the content might be overwritten, depending on the mirroring policy and content type. 4.3. Importing custom SSL certificates Before you synchronize custom content from an external source, you might need to import SSL certificates into your custom product. This might include client certs and keys or CA certificates for the upstream repositories you want to synchronize. If you require SSL certificates and keys to download packages, you can add them to Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Content Credentials . In the Content Credentials window, click Create Content Credential . In the Name field, enter a name for your SSL certificate. From the Type list, select SSL Certificate . In the Content Credentials Content field, paste your SSL certificate, or click Browse to upload your SSL certificate. Click Save . CLI procedure Copy the SSL certificate to your Satellite Server: Or download the SSL certificate to your Satellite Server from an online source: Upload the SSL Certificate to Satellite: 4.4. Creating a custom product Create a custom product so that you can add repositories to the custom product. To use the CLI instead of the Satellite web UI, see the CLI procedure . 
Procedure In the Satellite web UI, navigate to Content > Products , click Create Product . In the Name field, enter a name for the product. Satellite automatically completes the Label field based on what you have entered for Name . Optional: From the GPG Key list, select the GPG key for the product. Optional: From the SSL CA Cert list, select the SSL CA certificate for the product. Optional: From the SSL Client Cert list, select the SSL client certificate for the product. Optional: From the SSL Client Key list, select the SSL client key for the product. Optional: From the Sync Plan list, select an existing sync plan or click Create Sync Plan and create a sync plan for your product requirements. In the Description field, enter a description of the product. Click Save . CLI procedure To create the product, enter the following command: 4.5. Adding custom RPM repositories Use this procedure to add custom RPM repositories in Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . The Products window in the Satellite web UI also provides a Repo Discovery function that finds all repositories from a URL and you can select which ones to add to your custom product. For example, you can use the Repo Discovery to search https://download.postgresql.org/pub/repos/yum/16/redhat/ and list all repositories for different Red Hat Enterprise Linux versions and architectures. This helps users save time importing multiple repositories from a single source. Support for custom RPMs Red Hat does not support the upstream RPMs directly from third-party sites. These RPMs are used to demonstrate the synchronization process. For any issues with these RPMs, contact the third-party developers. Procedure In the Satellite web UI, navigate to Content > Products and select the product that you want to use, and then click New Repository . In the Name field, enter a name for the repository. Satellite automatically completes the Label field based on what you have entered for Name . Optional: In the Description field, enter a description for the repository. From the Type list, select yum as type of repository. Optional: From the Restrict to Architecture list, select an architecture. If you want to make the repository available to all hosts regardless of the architecture, ensure to select No restriction . Optional: From the Restrict to OS Version list, select the operating system version. If you want to make the repository available to all hosts regardless of the operating system version, ensure to select No restriction . Optional: In the Upstream URL field, enter the URL of the external repository to use as a source. Satellite supports three protocols: http:// , https:// , and file:// . If you are using a file:// repository, you have to place it under /var/lib/pulp/sync_imports/ directory. If you do not enter an upstream URL, you can manually upload packages. Optional: Check the Ignore SRPMs checkbox to exclude source RPM packages from being synchronized to Satellite. Optional: Check the Ignore treeinfo checkbox if you receive the error Treeinfo file should have INI format . All files related to Kickstart will be missing from the repository if treeinfo files are skipped. Select the Verify SSL checkbox if you want to verify that the upstream repository's SSL certificates are signed by a trusted CA. Optional: In the Upstream Username field, enter the user name for the upstream repository if required for authentication. Clear this field if the repository does not require authentication. 
Optional: In the Upstream Password field, enter the corresponding password for the upstream repository. Clear this field if the repository does not require authentication. Optional: In the Upstream Authentication Token field, provide the token of the upstream repository user for authentication. Leave this field empty if the repository does not require authentication. From the Download Policy list, select the type of synchronization Satellite Server performs. For more information, see Section 4.9, "Download policies overview" . From the Mirroring Policy list, select the type of content synchronization Satellite Server performs. For more information, see Section 4.12, "Mirroring policies overview" . Optional: In the Retain package versions field, enter the number of versions you want to retain per package. Optional: In the HTTP Proxy Policy field, select an HTTP proxy. From the Checksum list, select the checksum type for the repository. Optional: You can clear the Unprotected checkbox to require a subscription entitlement certificate for accessing this repository. By default, the repository is published through HTTP. Optional: From the GPG Key list, select the GPG key for the product. Optional: In the SSL CA Cert field, select the SSL CA Certificate for the repository. Optional: In the SSL Client cert field, select the SSL Client Certificate for the repository. Optional: In the SSL Client Key field, select the SSL Client Key for the repository. Click Save to create the repository. CLI procedure Enter the following command to create the repository: Continue to synchronize the repository . 4.6. Enabling Red Hat repositories If outside network access requires usage of an HTTP proxy, configure a default HTTP proxy for your server. For more information, see Adding a Default HTTP Proxy to Satellite . To select the repositories to synchronize, you must first identify the product that contains the repository, and then enable that repository based on the relevant release version and base architecture. For Red Hat Enterprise Linux 8 hosts To provision Red Hat Enterprise Linux 8 hosts, you require the Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) and Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) repositories. For Red Hat Enterprise Linux 7 hosts To provision Red Hat Enterprise Linux 7 hosts, you require the Red Hat Enterprise Linux 7 Server (RPMs) repository. The difference between associating the Red Hat Enterprise Linux operating system release version with either 7Server repositories or 7. X repositories is that 7Server repositories contain all the latest updates while Red Hat Enterprise Linux 7. X repositories stop getting updates after the next minor version release. Note that Kickstart repositories only have minor versions. Procedure In the Satellite web UI, navigate to Content > Red Hat Repositories . To find repositories, either enter the repository name, or toggle the Recommended Repositories button to the on position to view a list of repositories that you require. In the Available Repositories pane, click a repository to expand the repository set. Click the Enable icon next to the base architecture and release version that you want. CLI procedure To search for your product, enter the following command: List the repository set for the product: Enable the repository using either the name or ID number. Include the release version, such as 7Server , and base architecture, such as x86_64 . 4.7. Synchronizing repositories You must synchronize repositories to download content into Satellite. 
You can use this procedure for an initial synchronization of repositories or to synchronize repositories manually as you need. You can also sync all repositories in an organization. For more information, see Section 4.8, "Synchronizing all repositories in an organization" . Create a sync plan to ensure updates on a regular basis. For more information, see Section 4.24, "Creating a sync plan" . The synchronization duration depends on the size of each repository and the speed of your network connection. The following table provides estimates of how long it would take to synchronize content, depending on the available Internet bandwidth:
Bandwidth | Single Package (10Mb) | Minor Release (750Mb) | Major Release (6Gb)
256 Kbps | 5 Mins 27 Secs | 6 Hrs 49 Mins 36 Secs | 2 Days 7 Hrs 55 Mins
512 Kbps | 2 Mins 43.84 Secs | 3 Hrs 24 Mins 48 Secs | 1 Day 3 Hrs 57 Mins
T1 (1.5 Mbps) | 54.33 Secs | 1 Hr 7 Mins 54.78 Secs | 9 Hrs 16 Mins 20.57 Secs
10 Mbps | 8.39 Secs | 10 Mins 29.15 Secs | 1 Hr 25 Mins 53.96 Secs
100 Mbps | 0.84 Secs | 1 Min 2.91 Secs | 8 Mins 35.4 Secs
1000 Mbps | 0.08 Secs | 6.29 Secs | 51.54 Secs
Procedure In the Satellite web UI, navigate to Content > Products and select the product that contains the repositories that you want to synchronize. Select the repositories that you want to synchronize and click Sync Now . Optional: To view the progress of the synchronization in the Satellite web UI, navigate to Content > Sync Status and expand the corresponding product or repository tree. CLI procedure Synchronize an entire product: Synchronize an individual repository: 4.8. Synchronizing all repositories in an organization Use this procedure to synchronize all repositories within an organization. Procedure Log in to your Satellite Server using SSH. Run the following Bash script: ORG=" My_Organization " for i in USD(hammer --no-headers --csv repository list --organization USDORG --fields Id) do hammer repository synchronize --id USD{i} --organization USDORG --async done 4.9. Download policies overview Red Hat Satellite provides multiple download policies for synchronizing RPM content. For example, you might want to download only the content metadata while deferring the actual content download for later. Satellite Server has the following policies: Immediate Satellite Server downloads all metadata and packages during synchronization. On Demand Satellite Server downloads only the metadata during synchronization. Satellite Server only fetches and stores packages on the file system when Capsules or directly connected clients request them. This setting has no effect if you set a corresponding repository on a Capsule to Immediate because Satellite Server is forced to download all the packages. The On Demand policy acts as a Lazy Synchronization feature because it saves time when synchronizing content. The lazy synchronization feature must be used only for Yum repositories. You can add the packages to content views and promote to lifecycle environments as normal. Capsule Server has the following policies: Immediate Capsule Server downloads all metadata and packages during synchronization. Do not use this setting if the corresponding repository on Satellite Server is set to On Demand as Satellite Server is forced to download all the packages. On Demand Capsule Server only downloads the metadata during synchronization. Capsule Server fetches and stores packages only on the file system when directly connected clients request them. 
When you use an On Demand download policy, content is downloaded from Satellite Server if it is not available on Capsule Server. Inherit Capsule Server inherits the download policy for the repository from the corresponding repository on Satellite Server. Streamed Download Policy Streamed Download Policy for Capsules permits Capsules to avoid caching any content. When content is requested from the Capsule, it functions as a proxy and requests the content directly from the Satellite. 4.10. Changing the default download policy You can set the default download policy that Satellite applies to repositories that you create in all organizations. Depending on whether it is a Red Hat or non-Red Hat custom repository, Satellite uses separate settings. Changing the default value does not change existing settings. Procedure In the Satellite web UI, navigate to Administer > Settings . Click the Content tab. Change the default download policy depending on your requirements: To change the default download policy for a Red Hat repository, change the value of the Default Red Hat Repository download policy setting. To change the default download policy for a custom repository, change the value of the Default Custom Repository download policy setting. CLI procedure To change the default download policy for Red Hat repositories to one of immediate or on_demand , enter the following command: To change the default download policy for a non-Red Hat custom repository to one of immediate or on_demand , enter the following command: 4.11. Changing the download policy for a repository You can set the download policy for a repository. Procedure In the Satellite web UI, navigate to Content > Products . Select the required product name. On the Repositories tab, click the required repository name, locate the Download Policy field, and click the edit icon. From the list, select the required download policy and then click Save . CLI procedure List the repositories for an organization: Change the download policy for a repository to immediate or on_demand : 4.12. Mirroring policies overview Mirroring keeps the local repository exactly in synchronization with the upstream repository. If any content is removed from the upstream repository since the last synchronization, with the synchronization, it will be removed from the local repository as well. You can use mirroring policies for finer control over mirroring of repodata and content when synchronizing a repository. For example, if it is not possible to mirror the repodata for a repository, you can set the mirroring policy to mirror only content for this repository. Satellite Server has the following mirroring policies: Additive Neither the content nor the repodata is mirrored. Thus, only new content added since the last synchronization is added to the local repository and nothing is removed. Content Only Mirrors only content and not the repodata. Some repositories do not support metadata mirroring, in such cases you can set the mirroring policy to content only to only mirror the content. Complete Mirroring Mirrors content as well as repodata. This is the fastest method. This mirroring policy is only available for Yum content. Warning Avoid republishing metadata for repositories with Complete Mirror mirroring policy. This also applies to content views containing repositories with the Complete Mirror mirroring policy. 4.13. Changing the mirroring policy for a repository You can set the mirroring policy for a repository. 
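If you need to change the policy for many repositories at once, the update can also be scripted. The following is a minimal sketch, assuming the organization name is a placeholder and that every repository should be switched to the mirror_content_only policy; filter the repository list first if only some repositories should change. The step-by-step web UI and CLI procedures follow:
# Hedged sketch: set the mirroring policy for all repositories in an organization.
ORG="My_Organization"
for id in $(hammer --no-headers --csv repository list --organization "$ORG" --fields Id)
do
  hammer repository update --id "$id" --mirroring-policy mirror_content_only
done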
To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Products . Select the product name. On the Repositories tab, click the repository name, locate the Mirroring Policy field, and click the edit icon. From the list, select a mirroring policy and click Save . CLI procedure List the repositories for an organization: Change the mirroring policy for a repository to additive , mirror_complete , or mirror_content_only : 4.14. Uploading content to custom RPM repositories You can upload individual RPMs and source RPMs to custom RPM repositories. You can upload RPMs using the Satellite web UI or the Hammer CLI. You must use the Hammer CLI to upload source RPMs. Procedure In the Satellite web UI, navigate to Content > Products . Click the name of the custom product. In the Repositories tab, click the name of the custom RPM repository. Under Upload Package , click Browse... and select the RPM you want to upload. Click Upload . To view all RPMs in this repository, click the number to Packages under Content Counts . CLI procedure Enter the following command to upload an RPM: Enter the following command to upload a source RPM: When the upload is complete, you can view information about a source RPM by using the commands hammer srpm list and hammer srpm info --id srpm_ID . 4.15. Refreshing content counts on Capsule If your Capsules have synchronized content enabled, you can refresh the number of content counts available to the environments associated with the Capsule. This displays the content views inside those environments available to the Capsule. You can then expand the content view to view the repositories associated with that content view version. Procedure In the Satellite web UI, navigate to Infrastructure > Capsules , and select the Capsule where you want to see the synchronized content. Select the Overview tab. Under Content Sync , toggle the Synchronize button to do an Optimized Sync or a Complete Sync to synchronize the Capsule which refreshes the content counts. Select the Content tab. Choose an Environment to view content views available to those Capsules by clicking > . Expand the content view by clicking > to view repositories available to the content view and the specific version for the environment. View the number of content counts under Packages specific to yum repositories. View the number of errata, package groups, files, container tags, container manifests, and Ansible collections under Additional content . Click the vertical ellipsis in the column to the right to the environment and click Refresh counts to refresh the content counts synchronized on the Capsule under Packages . 4.16. Configuring SELinux to permit content synchronization on custom ports SELinux permits access of Satellite for content synchronization only on specific ports. By default, connecting to web servers running on the following ports is permitted: 80, 81, 443, 488, 8008, 8009, 8443, and 9000. Procedure On Satellite, to verify the ports that are permitted by SELinux for content synchronization, enter a command as follows: To configure SELinux to permit a port for content synchronization, for example 10011, enter a command as follows: 4.17. Recovering a corrupted repository In case of repository corruption, you can recover it by using an advanced synchronization, which has three options: Optimized Sync Synchronizes the repository bypassing packages that have no detected differences from the upstream packages. 
Complete Sync Synchronizes all packages regardless of detected changes. Use this option if specific packages could not be downloaded to the local repository even though they exist in the upstream repository. Verify Content Checksum Synchronizes all packages and then verifies the checksum of all packages locally. If the checksum of an RPM differs from the upstream, it re-downloads the RPM. This option is relevant only for Yum content. Use this option if you have one of the following errors: Specific packages cause a 404 error while synchronizing with yum . Package does not match intended download error, which means that specific packages are corrupted. Procedure In the Satellite web UI, navigate to Content > Products . Select the product containing the corrupted repository. Select the name of a repository you want to synchronize. To perform optimized sync or complete sync, select Advanced Sync from the Select Action menu. Select the required option and click Sync . Optional: To verify the checksum, click Verify Content Checksum from the Select Action menu. CLI procedure Obtain a list of repository IDs: Synchronize a corrupted repository using the necessary option: For the optimized synchronization: For the complete synchronization: For the validate content synchronization: 4.18. Recovering corrupted content on Capsule If the client is unable to consume content from a published repository to which it has a subscription, the content has been corrupted and needs to be repaired. In case of content corruption on a Capsule, you can recover it by using the verify-checksum command in Hammer CLI. The verify-checksum command can repair content in a content view, lifecycle environment, repository, or all content on Capsule. You can track the progress of a command by navigating to Monitor > Satellite Tasks > Tasks and searching for the action Verify checksum for content on smart proxy . CLI procedure To repair content in a content view, run Hammer on your Capsule: To repair content in a lifecycle environment, run Hammer on your Capsule: To repair content in a repository, run Hammer on your Capsule: To repair all content on Capsule, run the following command: 4.19. Republishing repository metadata You can republish repository metadata when a repository distribution does not have the content that should be distributed based on the contents of the repository. Use this procedure with caution. Red Hat recommends a complete repository sync or publishing a new content view version to repair broken metadata. Procedure In the Satellite web UI, navigate to Content > Products . Select the product that includes the repository for which you want to republish metadata. On the Repositories tab, select a repository. To republish metadata for the repository, click Republish Repository Metadata from the Select Action menu. Note This action is not available for repositories that use the Complete Mirroring policy because the metadata is copied verbatim from the upstream source of the repository. 4.20. Republishing content view metadata Use this procedure to republish content view metadata. Procedure In the Satellite web UI, navigate to Content > Lifecycle > Content Views . Select a content view. On the Versions tab, select a content view version. To republish metadata for the content view version, click Republish repository metadata from the vertical ellipsis icon. Republishing repository metadata will regenerate metadata for all repositories in the content view version that do not adhere to the Complete Mirroring policy. 
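As a consolidated example of the repair options in Section 4.17 and Section 4.18, the following sketch re-synchronizes a suspect repository with checksum validation and then verifies the matching content on a Capsule. The organization name and IDs are placeholders, and checksum validation applies only to Yum content:
# Hedged sketch: repair a corrupted Yum repository, then verify it on a Capsule.
ORG="My_Organization"
# Find the ID of the affected repository.
hammer repository list --organization "$ORG"
# Re-download any package whose checksum differs from the upstream ("Verify Content Checksum").
hammer repository synchronize --id My_Repository_ID --validate-contents true
# Run on the Capsule: verify the checksums of the same repository content there.
hammer capsule content verify-checksum --id My_Capsule_ID --organization-id 1 --repository-id My_Repository_ID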
4.21. Adding an HTTP proxy Use this procedure to add HTTP proxies to Satellite. You can then specify which HTTP proxy to use for products, repositories, and supported compute resources. Prerequisites Your HTTP proxy must allow access to the following hosts: Host name Port Protocol subscription.rhsm.redhat.com 443 HTTPS cdn.redhat.com 443 HTTPS cert.console.redhat.com (if using Red Hat Insights) 443 HTTPS api.access.redhat.com (if using Red Hat Insights) 443 HTTPS cert-api.access.redhat.com (if using Red Hat Insights) 443 HTTPS If Satellite Server uses a proxy to communicate with subscription.rhsm.redhat.com and cdn.redhat.com then the proxy must not perform SSL inspection on these communications. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > HTTP Proxies . Select New HTTP Proxy . In the Name field, enter a name for the HTTP proxy. In the URL field, enter the URL for the HTTP proxy, including the port number. If your HTTP proxy requires authentication, enter a Username and Password . Optional: In the Test URL field, enter the HTTP proxy URL, then click Test Connection to ensure that you can connect to the HTTP proxy from Satellite. Click the Locations tab and add a location. Click the Organization tab and add an organization. Click Submit . CLI procedure On Satellite Server, enter the following command to add an HTTP proxy: If your HTTP proxy requires authentication, add the --username My_User_Name and --password My_Password options. For further information, see the Knowledgebase article How to access Red Hat Subscription Manager (RHSM) through a firewall or proxy on the Red Hat Customer Portal. 4.22. Changing the HTTP proxy policy for a product For granular control over network traffic, you can set an HTTP proxy policy for each product. A product's HTTP proxy policy applies to all repositories in the product, unless you set a different policy for individual repositories. To set an HTTP proxy policy for individual repositories, see Section 4.23, "Changing the HTTP proxy policy for a repository" . Procedure In the Satellite web UI, navigate to Content > Products and select the products that you want to change. From the Select Action list, select Manage HTTP Proxy . Select an HTTP Proxy Policy from the list: Global Default : Use the global default proxy setting. No HTTP Proxy : Do not use an HTTP proxy, even if a global default proxy is configured. Use specific HTTP Proxy : Select an HTTP Proxy from the list. You must add HTTP proxies to Satellite before you can select a proxy from this list. For more information, see Section 4.21, "Adding an HTTP proxy" . Click Update . 4.23. Changing the HTTP proxy policy for a repository For granular control over network traffic, you can set an HTTP proxy policy for each repository. To use the CLI instead of the Satellite web UI, see the CLI procedure . To set the same HTTP proxy policy for all repositories in a product, see Section 4.22, "Changing the HTTP proxy policy for a product" . Procedure In the Satellite web UI, navigate to Content > Products and click the name of the product that contains the repository. In the Repositories tab, click the name of the repository. Locate the HTTP Proxy field and click the edit icon. Select an HTTP Proxy Policy from the list: Global Default : Use the global default proxy setting. No HTTP Proxy : Do not use an HTTP proxy, even if a global default proxy is configured. Use specific HTTP Proxy : Select an HTTP Proxy from the list. 
You must add HTTP proxies to Satellite before you can select a proxy from this list. For more information, see Section 4.21, "Adding an HTTP proxy" . Click Save . CLI procedure On Satellite Server, enter the following command, specifying the HTTP proxy policy you want to use: Specify one of the following options for --http-proxy-policy : none : Do not use an HTTP proxy, even if a global default proxy is configured. global_default_http_proxy : Use the global default proxy setting. use_selected_http_proxy : Specify an HTTP proxy using either --http-proxy My_HTTP_Proxy_Name or --http-proxy-id My_HTTP_Proxy_ID . To add a new HTTP proxy to Satellite, see Section 4.21, "Adding an HTTP proxy" . 4.24. Creating a sync plan A sync plan checks and updates the content at a scheduled date and time. In Satellite, you can create a sync plan and assign products to the plan. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Sync Plans and click New Sync Plan . In the Name field, enter a name for the plan. Optional: In the Description field, enter a description of the plan. From the Interval list, select the interval at which you want the plan to run. From the Start Date and Start Time lists, select when to start running the synchronization plan. Click Save . CLI procedure To create the synchronization plan, enter the following command: View the available sync plans for an organization to verify that the sync plan has been created: 4.25. Assigning a sync plan to a product A sync plan checks and updates the content at a scheduled date and time. In Satellite, you can assign a sync plan to products to update content regularly. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Products . Select a product. On the Details tab, select a Sync Plan from the drop down menu. CLI procedure Assign a sync plan to a product: 4.26. Assigning a sync plan to multiple products Use this procedure to assign a sync plan to the products in an organization that have been synchronized at least once and contain at least one repository. Procedure Run the following Bash script: ORG=" My_Organization " SYNC_PLAN="daily_sync_at_3_a.m" hammer sync-plan create --name USDSYNC_PLAN --interval daily --sync-date "2023-04-5 03:00:00" --enabled true --organization USDORG for i in USD(hammer --no-headers --csv --csv-separator="|" product list --organization USDORG --per-page 999 | grep -vi not_synced | awk -F'|' 'USD5 != "0" { print USD1}') do hammer product set-sync-plan --sync-plan USDSYNC_PLAN --organization USDORG --id USDi done After executing the script, view the products assigned to the sync plan: 4.27. Best practices for sync plans Add sync plans to products and regularly synchronize content to keep the load on Satellite low during synchronization. Synchronize content rather more often than less often. For example, setup a sync plan to synchronize content every day rather than only once a month. Automate the creation and update of sync plans by using a Hammer script or an Ansible Playbook . Distribute synchronization tasks over several hours to reduce the task load by creating multiple sync plans with the Custom Cron tool. Table 4.1. Cron expression examples Cron expression Explanation 0 22 * * 1-5 every day at 22:00 from Monday to Friday 30 3 * * 6,0 at 03:30 every Saturday and Sunday 30 2 8-14 * * at 02:30 every day between the 8th and the 14th days of the month 4.28. 
Limiting synchronization concurrency By default, each Repository Synchronization job can fetch up to ten files at a time. This can be adjusted on a per repository basis. Increasing the limit may improve performance, but can cause the upstream server to be overloaded or start rejecting requests. If you are seeing Repository syncs fail due to the upstream servers rejecting requests, you may want to try lowering the limit. CLI procedure 4.29. Importing a custom GPG key When clients are consuming signed custom content, ensure that the clients are configured to validate the installation of packages with the appropriate GPG Key. This helps to ensure that only packages from authorized sources can be installed. Red Hat content is already configured with the appropriate GPG key and thus GPG Key management of Red Hat Repositories is not supported. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites Ensure that you have a copy of the GPG key used to sign the RPM content that you want to use and manage in Satellite. Most RPM distribution providers provide their GPG Key on their website. You can also extract this manually from an RPM: Download a copy of the version specific repository package to your local machine: Extract the RPM file without installing it: The GPG key is located relative to the extraction at etc/pki/rpm-gpg/RPM-GPG-KEY- EXAMPLE-95 . Procedure In the Satellite web UI, navigate to Content > Content Credentials and in the upper-right of the window, click Create Content Credential . Enter the name of your repository and select GPG Key from the Type list. Either paste the GPG key into the Content Credential Contents field, or click Browse and select the GPG key file that you want to import. If your custom repository contains content signed by multiple GPG keys, you must enter all required GPG keys in the Content Credential Contents field with new lines between each key, for example: Click Save . CLI procedure Copy the GPG key to your Satellite Server: Upload the GPG key to Satellite: 4.30. Restricting a custom repository to a specific operating system or architecture in Satellite You can configure Satellite to make a custom repository available only on hosts with a specific operating system version or architecture. For example, you can restrict a custom repository only to Red Hat Enterprise Linux 9 hosts. Note Only restrict architecture and operating system version for custom products. Satellite applies these restrictions automatically for Red Hat repositories. Procedure In the Satellite web UI, navigate to Content > Products . Click the product that contains the repository sets you want to restrict. In the Repositories tab, click the repository you want to restrict. In the Publishing Settings section, set the following options: Set Restrict to OS version to restrict the operating system version. Set Restrict to architecture to restrict the architecture.
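The same restrictions can also be applied when you create a custom repository from the CLI. The following is a minimal sketch, assuming an existing custom product and GPG key; the URL, names, and the exact --os-version value are placeholders, so check hammer repository create --help on your Satellite for the accepted format:
# Hedged sketch: create a custom Yum repository restricted to a specific OS version and architecture.
hammer repository create --organization "My_Organization" --product "My_Product" --name "My_Repository" --content-type "yum" --url My_Upstream_URL --gpg-key-id My_GPG_Key_ID --arch "x86_64" --os-version "My_Operating_System_Version" --publish-via-http true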
[ "scp My_SSL_Certificate [email protected]:~/.", "wget -P ~ http:// upstream-satellite.example.com /pub/katello-server-ca.crt", "hammer content-credential create --content-type cert --name \" My_SSL_Certificate \" --organization \" My_Organization \" --path ~/ My_SSL_Certificate", "hammer product create --name \" My_Product \" --sync-plan \" Example Plan \" --description \" Content from My Repositories \" --organization \" My_Organization \"", "hammer repository create --arch \" My_Architecture \" --content-type \"yum\" --gpg-key-id My_GPG_Key_ID --name \" My_Repository \" --organization \" My_Organization \" --os-version \" My_Operating_System_Version \" --product \" My_Product \" --publish-via-http true --url My_Upstream_URL", "hammer product list --organization \" My_Organization \"", "hammer repository-set list --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \"", "hammer repository-set enable --name \"Red Hat Enterprise Linux 7 Server (RPMs)\" --releasever \"7Server\" --basearch \"x86_64\" --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \"", "hammer product synchronize --name \" My_Product \" --organization \" My_Organization \"", "hammer repository synchronize --name \" My_Repository \" --organization \" My_Organization \" --product \" My Product \"", "ORG=\" My_Organization \" for i in USD(hammer --no-headers --csv repository list --organization USDORG --fields Id) do hammer repository synchronize --id USD{i} --organization USDORG --async done", "hammer settings set --name default_redhat_download_policy --value immediate", "hammer settings set --name default_download_policy --value immediate", "hammer repository list --organization-label My_Organization_Label", "hammer repository update --download-policy immediate --name \" My_Repository \" --organization-label My_Organization_Label --product \" My_Product \"", "hammer repository list --organization-label My_Organization_Label", "hammer repository update --id 1 --mirroring-policy mirror_complete", "hammer repository upload-content --id My_Repository_ID --path /path/to/example-package.rpm", "hammer repository upload-content --content-type srpm --id My_Repository_ID --path /path/to/example-package.src.rpm", "semanage port -l | grep ^http_port_t http_port_t tcp 80, 81, 443, 488, 8008, 8009, 8443, 9000", "semanage port -a -t http_port_t -p tcp 10011", "hammer repository list --organization \" My_Organization \"", "hammer repository synchronize --id My_ID", "hammer repository synchronize --id My_ID --skip-metadata-check true", "hammer repository synchronize --id My_ID --validate-contents true", "hammer capsule content verify-checksum --id My_Capsule_ID --organization-id 1 --content-view-id 3", "hammer capsule content verify-checksum --id My_Capsule_ID --organization-id 1 --lifecycle-environment-id 1", "hammer capsule content verify-checksum --id My_Capsule_ID --organization-id 1 --repository-id 1", "hammer capsule content verify-checksum --id My_Capsule_ID", "hammer http-proxy create --name My_HTTP_Proxy --url http-proxy.example.com:8080", "hammer repository update --http-proxy-policy HTTP_Proxy_Policy --id Repository_ID", "hammer sync-plan create --description \" My_Description \" --enabled true --interval daily --name \" My_Products \" --organization \" My_Organization \" --sync-date \"2023-01-01 01:00:00\"", "hammer sync-plan list --organization \" My_Organization \"", "hammer product set-sync-plan --name \" My_Product_Name \" --organization \" My_Organization \" 
--sync-plan \" My_Sync_Plan_Name \"", "ORG=\" My_Organization \" SYNC_PLAN=\"daily_sync_at_3_a.m\" hammer sync-plan create --name USDSYNC_PLAN --interval daily --sync-date \"2023-04-5 03:00:00\" --enabled true --organization USDORG for i in USD(hammer --no-headers --csv --csv-separator=\"|\" product list --organization USDORG --per-page 999 | grep -vi not_synced | awk -F'|' 'USD5 != \"0\" { print USD1}') do hammer product set-sync-plan --sync-plan USDSYNC_PLAN --organization USDORG --id USDi done", "hammer product list --organization USDORG --sync-plan USDSYNC_PLAN", "hammer repository update --download-concurrency 5 --id Repository_ID --organization \" My_Organization \"", "wget http://www.example.com/9.5/example-9.5-2.noarch.rpm", "rpm2cpio example-9.5-2.noarch.rpm | cpio -idmv", "-----BEGIN PGP PUBLIC KEY BLOCK----- mQINBFy/HE4BEADttv2TCPzVrre+aJ9f5QsR6oWZMm7N5Lwxjm5x5zA9BLiPPGFN 4aTUR/g+K1S0aqCU+ZS3Rnxb+6fnBxD+COH9kMqXHi3M5UNzbp5WhCdUpISXjjpU XIFFWBPuBfyr/FKRknFH15P+9kLZLxCpVZZLsweLWCuw+JKCMmnA =F6VG -----END PGP PUBLIC KEY BLOCK----- -----BEGIN PGP PUBLIC KEY BLOCK----- mQINBFw467UBEACmREzDeK/kuScCmfJfHJa0Wgh/2fbJLLt3KSvsgDhORIptf+PP OTFDlKuLkJx99ZYG5xMnBG47C7ByoMec1j94YeXczuBbynOyyPlvduma/zf8oB9e Wl5GnzcLGAnUSRamfqGUWcyMMinHHIKIc1X1P4I= =WPpI -----END PGP PUBLIC KEY BLOCK-----", "scp ~/etc/pki/rpm-gpg/RPM-GPG-KEY- EXAMPLE-95 [email protected]:~/.", "hammer content-credentials create --content-type gpg_key --name \" My_GPG_Key \" --organization \" My_Organization \" --path ~/RPM-GPG-KEY- EXAMPLE-95" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_content/importing_content_content-management
Chapter 5. Accessing AD with a Managed Service Account
Chapter 5. Accessing AD with a Managed Service Account Active Directory (AD) Managed Service Accounts (MSAs) allow you to create an account in AD that corresponds to a specific computer. You can use an MSA to connect to AD resources as a specific user principal, without joining the RHEL host to the AD domain. 5.1. The benefits of a Managed Service Account If you want to allow a RHEL host to access an Active Directory (AD) domain without joining it, you can use a Managed Service Account (MSA) to access that domain. An MSA is an account in AD that corresponds to a specific computer, which you can use to connect to AD resources as a specific user principal. For example, if the AD domain production.example.com has a one-way trust relationship with the lab.example.com AD domain, the following conditions apply: The lab domain trusts users and hosts from the production domain. The production domain does not trust users and hosts from the lab domain. This means that a host joined to the lab domain, such as client.lab.example.com , cannot access resources from the production domain through the trust. If you want to create an exception for the client.lab.example.com host, you can use the adcli utility to create a MSA for the client host in the production.example.com domain. By authenticating with the Kerberos principal of the MSA, you can perform secure LDAP searches in the production domain from the client host. 5.2. Configuring a Managed Service Account for a RHEL host This procedure creates a Managed Service Account (MSA) for a host from the lab.example.com Active Directory (AD) domain, and configures SSSD so you can access and authenticate to the production.example.com AD domain. Note If you need to access AD resources from a RHEL host, Red Hat recommends that you join the RHEL host to the AD domain with the realm command. See Connecting RHEL systems directly to AD using SSSD . Only perform this procedure if one of the following conditions applies: You cannot join the RHEL host to the AD domain, and you want to create an account for that host in AD. You have joined the RHEL host to an AD domain, and you need to access another AD domain where the host credentials from the domain you have joined are not valid, such as with a one-way trust. Prerequisites Ensure that the following ports on the RHEL host are open and accessible to the AD domain controllers. Service Port Protocols DNS 53 TCP, UDP LDAP 389 TCP, UDP LDAPS (optional) 636 TCP, UDP Kerberos 88 TCP, UDP You have the password for an AD Administrator that has rights to create MSAs in the production.example.com domain. You have root permissions that are required to run the adcli command, and to modify the /etc/sssd/sssd.conf configuration file.. (Optional) You have the krb5-workstation package installed, which includes the klist diagnostic utility. Procedure Create an MSA for the host in the production.example.com AD domain. Display information about the MSA from the Kerberos keytab that was created. Make note of the MSA name: Open the /etc/sssd/sssd.conf file and choose the appropriate SSSD domain configuration to add: If the MSA corresponds to an AD domain from a different forest , create a new domain section named [domain/<name_of_domain>] , and enter information about the MSA and the keytab. 
The most important options are ldap_sasl_authid , ldap_krb5_keytab , and krb5_keytab : If the MSA corresponds to an AD domain from the local forest , create a new sub-domain section in the format [domain/root.example.com/sub-domain.example.com] , and enter information about the MSA and the keytab. The most important options are ldap_sasl_authid , ldap_krb5_keytab , and krb5_keytab : Verification Verify you can retrieve a Kerberos ticket-granting ticket (TGT) as the MSA: In AD, verify you have an MSA for the host in the Managed Service Accounts Organizational Unit (OU). Additional resources Connecting RHEL systems directly to AD using SSSD 5.3. Updating the password for a Managed Service Account Managed Service Accounts (MSAs) have a complex password that is maintained automatically by Active Directory (AD). By default, the System Services Security Daemon (SSSD) automatically updates the MSA password in the Kerberos keytab if it is older than 30 days, which keeps it up to date with the password in AD. This procedure explains how to manually update the password for your MSA. Prerequisites You have previously created an MSA for a host in the production.example.com AD domain. (Optional) You have the krb5-workstation package installed, which includes the klist diagnostic utility. Procedure Optional: Display the current Key Version Number (KVNO) for the MSA in the Kerberos keytab. The current KVNO is 2. Update the password for the MSA in the production.example.com AD domain. Verification Verify that you have incremented the KVNO in the Kerberos keytab: 5.4. Managed Service Account specifications The Managed Service Accounts (MSAs) that the adcli utility creates have the following specifications: They cannot have additional service principal names (SPNs). By default, the Kerberos principal for the MSA is stored in a Kerberos keytab named <default_keytab_location>.<Active_Directory_domain> , like /etc/krb5.keytab.production.example.com . MSA names are limited to 20 characters or fewer. The last 4 characters are a suffix of 3 random characters from number and upper- and lowercase ASCII ranges appended to the short host name you provide, using a ! character as a separator. For example, a host with the short name myhost receives an MSA with the following specifications: Specification Value Common name (CN) attribute myhost!A2c NetBIOS name myhost!A2cUSD sAMAccountName myhost!A2cUSD Kerberos principal in the production.example.com AD domain [email protected] 5.5. Options for the adcli create-msa command In addition to the global options you can pass to the adcli utility, you can specify the following options to specifically control how it handles Managed Service Accounts (MSAs). -N , --computer-name The short non-dotted name of the MSA that will be created in the Active Directory (AD) domain. If you do not specify a name, the first portion of the --host-fqdn or its default is used with a random suffix. -O , --domain-ou=OU= <path_to_OU> The full distinguished name of the Organizational Unit (OU) in which to create the MSA. If you do not specify this value, the MSA is created in the default location OU=CN=Managed Service Accounts,DC=EXAMPLE,DC=COM . -H , --host-fqdn=host Override the local machine's fully qualified DNS domain name. If you do not specify this option, the host name of the local machine is used. -K , --host-keytab= <path_to_keytab> The path to the host keytab to store MSA credentials. 
If you do not specify this value, the default location /etc/krb5.keytab is used with the lower-cased Active Directory domain name added as a suffix, such as /etc/krb5.keytab.domain.example.com . --use-ldaps Create the MSA over a Secure LDAP (LDAPS) channel. --verbose Print out detailed information while creating the MSA. --show-details Print out information about the MSA after creating it. --show-password Print out the MSA password after creating the MSA.
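Putting these options together, the following sketch creates an MSA with an explicit name, OU, and keytab location, and then verifies the resulting principal. The domain, host name, and OU path are placeholders for your environment, and adcli prompts for the credentials of an AD user that is allowed to create MSAs:
# Hedged sketch: create an MSA with explicit options, then verify the keytab.
adcli create-msa --domain=production.example.com --computer-name=mylabhost --domain-ou="OU=Managed Service Accounts,DC=production,DC=example,DC=com" --host-keytab=/etc/krb5.keytab.production.example.com --show-details
# List the principals in the keytab and note the generated MSA name, for example mylabhost!A2c$.
klist -k /etc/krb5.keytab.production.example.com
# Obtain a TGT as the MSA to confirm that authentication works (use the name shown by klist).
kinit -k -t /etc/krb5.keytab.production.example.com 'mylabhost!A2c$'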
[ "adcli create-msa --domain=production.example.com", "klist -k /etc/krb5.keytab.production.example.com Keytab name: FILE:/etc/krb5.keytab.production.example.com KVNO Principal ---- ------------------------------------------------------------ 2 [email protected] (aes256-cts-hmac-sha1-96) 2 [email protected] (aes128-cts-hmac-sha1-96)", "[domain/ production.example.com ] ldap_sasl_authid = [email protected] ldap_krb5_keytab = /etc/krb5.keytab.production.example.com krb5_keytab = /etc/krb5.keytab.production.example.com ad_domain = production.example.com krb5_realm = PRODUCTION.EXAMPLE.COM access_provider = ad", "[domain/ ad.example.com/production.example.com ] ldap_sasl_authid = [email protected] ldap_krb5_keytab = /etc/krb5.keytab.production.example.com krb5_keytab = /etc/krb5.keytab.production.example.com ad_domain = production.example.com krb5_realm = PRODUCTION.EXAMPLE.COM access_provider = ad", "kinit -k -t /etc/krb5.keytab.production.example.com 'CLIENT!S3AUSD' klist Ticket cache: KCM:0:54655 Default principal: [email protected] Valid starting Expires Service principal 11/22/2021 15:48:03 11/23/2021 15:48:03 krbtgt/[email protected]", "klist -k /etc/krb5.keytab.production.example.com Keytab name: FILE:/etc/krb5.keytab.production.example.com KVNO Principal ---- ------------------------------------------------------------ 2 [email protected] (aes256-cts-hmac-sha1-96) 2 [email protected] (aes128-cts-hmac-sha1-96)", "adcli update --domain=production.example.com --host-keytab=/etc/krb5.keytab.production.example.com --computer-password-lifetime=0", "klist -k /etc/krb5.keytab.production.example.com Keytab name: FILE:/etc/krb5.keytab.production.example.com KVNO Principal ---- ------------------------------------------------------------ 3 [email protected] (aes256-cts-hmac-sha1-96) 3 [email protected] (aes128-cts-hmac-sha1-96)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/integrating_rhel_systems_directly_with_windows_active_directory/assembly_accessing-ad-with-a-managed-service-account_integrating-rhel-systems-directly-with-active-directory
Chapter 3. Preparing for director installation
Chapter 3. Preparing for director installation 3.1. Preparing the undercloud Before you can install director, you must complete some basic configuration on the host machine: A non-root user to execute commands. Directories to organize images and templates. A resolvable hostname. A Red Hat subscription. The command line tools for image preparation and director installation. Procedure Log in to your undercloud as the root user. Create the stack user: Set a password for the user: Disable password requirements when using sudo : Switch to the new stack user: Create directories for system images and heat templates: Director uses system images and heat templates to create the overcloud environment. Red Hat recommends creating these directories to help you organize your local file system. Check the base and full hostname of the undercloud: If either of the commands do not report the correct fully-qualified hostname or report an error, use hostnamectl to set a hostname: Edit the /etc/hosts and include an entry for the system hostname. The IP address in /etc/hosts must match the address that you plan to use for your undercloud public API. For example, if the system is named manager.example.com and uses 10.0.0.1 for its IP address, add the following line to the /etc/hosts file: Register your system either with the Red Hat Content Delivery Network or with a Red Hat Satellite. For example, run the following command to register the system to the Content Delivery Network. Enter your Customer Portal user name and password when prompted: Find the entitlement pool ID for Red Hat OpenStack Platform (RHOSP) director: Locate the Pool ID value and attach the Red Hat OpenStack Platform 16.0 entitlement: Lock the undercloud to Red Hat Enterprise Linux 8.1: Disable all default repositories, and then enable the required Red Hat Enterprise Linux repositories: These repositories contain packages that the director installation requires. Perform an update on your system to ensure that you have the latest base system packages: Install the command line tools for director installation and configuration: 3.2. Installing ceph-ansible The ceph-ansible package is required when you use Ceph Storage with Red Hat OpenStack Platform. If you use Red Hat Ceph Storage, or if your deployment uses an external Ceph Storage cluster, install the ceph-ansible package. For more information about integrating with an existing Ceph Storage cluster, see Integrating an Overcloud with an Existing Red Hat Ceph Cluster . Procedure Enable the Ceph Tools repository: Install the ceph-ansible package: 3.3. Preparing container images The undercloud configuration requires initial registry configuration to determine where to obtain images and how to store them. Complete the following steps to generate and customize an environment file that you can use to prepare your container images. Procedure Log in to your undercloud host as the stack user. Generate the default container image preparation file: This command includes the following additional options: --local-push-destination sets the registry on the undercloud as the location for container images. This means the director pulls the necessary images from the Red Hat Container Catalog and pushes them to the registry on the undercloud. The director uses this registry as the container image source. To pull directly from the Red Hat Container Catalog, omit this option. --output-env-file is an environment file name. The contents of this file include the parameters for preparing your container images. 
In this case, the name of the file is containers-prepare-parameter.yaml . Note You can use the same containers-prepare-parameter.yaml file to define a container image source for both the undercloud and the overcloud. Modify the containers-prepare-parameter.yaml to suit your requirements. 3.4. Container image preparation parameters The default file for preparing your containers ( containers-prepare-parameter.yaml ) contains the ContainerImagePrepare heat parameter. This parameter defines a list of strategies for preparing a set of images: Each strategy accepts a set of sub-parameters that defines which images to use and what to do with the images. The following table contains information about the sub-parameters you can use with each ContainerImagePrepare strategy: Parameter Description excludes List of regular expressions to exclude image names from a strategy. includes List of regular expressions to include in a strategy. At least one image name must match an existing image. All excludes are ignored if includes is specified. modify_append_tag String to append to the tag for the destination image. For example, if you pull an image with the tag 14.0-89 and set the modify_append_tag to -hotfix , the director tags the final image as 14.0-89-hotfix . modify_only_with_labels A dictionary of image labels that filter the images that you want to modify. If an image matches the labels defined, the director includes the image in the modification process. modify_role String of ansible role names to run during upload but before pushing the image to the destination registry. modify_vars Dictionary of variables to pass to modify_role . push_destination Defines the namespace of the registry that you want to push images to during the upload process. If set to true , the push_destination is set to the undercloud registry namespace using the hostname, which is the recommended method. If set to false , the push to a local registry does not occur and nodes pull images directly from the source. If set to a custom value, director pushes images to an external local registry. If you choose to pull container images directly from the Red Hat Container Catalog, do not set this parameter to false in production environments or else all overcloud nodes will simultaneously pull the images from the Red Hat Container Catalog over your external connection, which can cause bandwidth issues. If the push_destination parameter is set to false or is not defined and the remote registry requires authentication, set the ContainerImageRegistryLogin parameter to true and include the credentials with the ContainerImageRegistryCredentials parameter. pull_source The source registry from where to pull the original container images. set A dictionary of key: value definitions that define where to obtain the initial images. tag_from_label Use the value of specified container image labels to discover and pull the versioned tag for every image. Director inspects each container image tagged with the value that you set for tag , then uses the container image labels to construct a new tag, which director pulls from the registry. For example, if you set tag_from_label: {version}-{release} , director uses the version and release labels to construct a new tag. For one container, version might be set to 13.0 and release might be set to 34 , which results in the tag 13.0-34 . The set parameter accepts a set of key: value definitions: Key Description ceph_image The name of the Ceph Storage container image. 
ceph_namespace The namespace of the Ceph Storage container image. ceph_tag The tag of the Ceph Storage container image. name_prefix A prefix for each OpenStack service image. name_suffix A suffix for each OpenStack service image. namespace The namespace for each OpenStack service image. neutron_driver The driver to use to determine which OpenStack Networking (neutron) container to use. Use a null value to set to the standard neutron-server container. Set to ovn to use OVN-based containers. tag Sets the specific tag for all images from the source. If you use this option without specifying a tag_from_label value, director pulls all container images that use this tag. However, if you use this option in combination with a tag_from_label value, director uses the tag as a source image to identify a specific version tag based on labels. Keep this key set to the default value, which is the Red Hat OpenStack Platform version number. Important The Red Hat Container Registry uses a specific version format to tag all Red Hat OpenStack Platform container images. This version format is {version}-{release} , which each container image stores as labels in the container metadata. This version format helps facilitate updates from one {release} to the next. For this reason, you must always use the tag_from_label: {version}-{release} parameter with the ContainerImagePrepare heat parameter. Do not use tag on its own to pull container images. For example, using tag by itself causes problems when performing updates because director requires a change in tag to update a container image. Important The container images use multi-stream tags based on the Red Hat OpenStack Platform version. This means there is no longer a latest tag. The ContainerImageRegistryCredentials parameter maps a container registry to a username and password to authenticate to that registry. If a container registry requires a username and password, you can use ContainerImageRegistryCredentials to include credentials with the following syntax: In the example, replace my_username and my_password with your authentication credentials. Instead of using your individual user credentials, Red Hat recommends creating a registry service account and using those credentials to access registry.redhat.io content. For more information, see "Red Hat Container Registry Authentication" . The ContainerImageRegistryLogin parameter is used to control the registry login on the systems being deployed. This must be set to true if push_destination is set to false or not used. 3.5. Layering image preparation entries The value of the ContainerImagePrepare parameter is a YAML list. This means that you can specify multiple entries. The following example demonstrates two entries where director uses the latest version of all images except for the nova-api image, which uses the version tagged with 16.0-44 : The includes and excludes parameters use regular expressions to control image filtering for each entry. The images that match the includes strategy take precedence over excludes matches. The image name must match the includes or excludes regular expression value to be considered a match. 3.6. Excluding Ceph Storage container images The default overcloud role configuration uses the default Controller, Compute, and Ceph Storage roles. 
However, if you use the default role configuration to deploy an overcloud without Ceph Storage nodes, director still pulls the Ceph Storage container images from the Red Hat Container Registry because the images are included as a part of the default configuration. If your overcloud does not require Ceph Storage containers, you can configure director to not pull the Ceph Storage containers images from the Red Hat Container Registry. Procedure Edit the containers-prepare-parameter.yaml file to exclude the Ceph Storage containers: The excludes parameter uses regular expressions to exclude any container images that contain the ceph or prometheus strings. Save the containers-prepare-parameter.yaml file. 3.7. Obtaining container images from private registries Some container image registries require authentication to access images. In this situation, use the ContainerImageRegistryCredentials parameter in your containers-prepare-parameter.yaml environment file. Important Private registries require push_destination set to true for their respective strategy in the ContainerImagePrepare . The ContainerImageRegistryCredentials parameter uses a set of keys based on the private registry URL. Each private registry URL uses its own key and value pair to define the username (key) and password (value). This provides a method to specify credentials for multiple private registries. Important The default ContainerImagePrepare parameter pulls container images from registry.redhat.io , which requires authentication. The ContainerImageRegistryLogin parameter is used to control whether the system needs to log in to the remote registry to fetch the containers. Important You must set this value to true if push_destination is not configured for a given strategy. If push_destination is configured in a ContainerImagePrepare strategy and the ContainerImageRegistryCredentials parameter is configured, the system logs in to fetch the containers and pushes them to the remote system. 3.8. Modifying images during preparation It is possible to modify images during image preparation, and then immediately deploy with modified images. Scenarios for modifying images include: As part of a continuous integration pipeline where images are modified with the changes being tested before deployment. As part of a development workflow where local changes must be deployed for testing and development. When changes must be deployed but are not available through an image build pipeline. For example, adding proprietary add-ons or emergency fixes. To modify an image during preparation, invoke an Ansible role on each image that you want to modify. The role takes a source image, makes the requested changes, and tags the result. The prepare command can push the image to the destination registry and set the heat parameters to refer to the modified image. The Ansible role tripleo-modify-image conforms with the required role interface and provides the behaviour necessary for the modify use cases. Control the modification with the modify-specific keys in the ContainerImagePrepare parameter: modify_role specifies the Ansible role to invoke for each image to modify. modify_append_tag appends a string to the end of the source image tag. This makes it obvious that the resulting image has been modified. Use this parameter to skip modification if the push_destination registry already contains the modified image. Change modify_append_tag whenever you modify the image. modify_vars is a dictionary of Ansible variables to pass to the role. 
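Schematically, the modify keys sit alongside an ordinary strategy entry as in the following skeleton. This is a hedged illustration only: the tag suffix and variables are placeholders, and complete working entries for specific use cases are shown in the next sections:
parameter_defaults:
  ContainerImagePrepare:
  - push_destination: true
    set:
      namespace: registry.redhat.io/rhosp-rhel8
    modify_role: tripleo-modify-image      # Ansible role invoked for each image
    modify_append_tag: "-modified"         # marks the resulting image as changed
    modify_vars:                           # variables passed to the role
      tasks_from: yum_update.yml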
To select a use case that the tripleo-modify-image role handles, set the tasks_from variable to the required file in that role. While developing and testing the ContainerImagePrepare entries that modify images, run the image prepare command without any additional options to confirm that the image is modified as you expect: 3.9. Updating existing packages on container images The following example ContainerImagePrepare entry updates all packages on the images using the dnf repository configuration on the undercloud host: 3.10. Installing additional RPM files to container images You can install a directory of RPM files in your container images. This is useful for installing hotfixes, local package builds, or any package that is not available through a package repository. For example, the following ContainerImagePrepare entry installs some hotfix packages only on the nova-compute image: 3.11. Modifying container images with a custom Dockerfile For maximum flexibility, you can specify a directory containing a Dockerfile to make the required changes. When you invoke the tripleo-modify-image role, the role generates a Dockerfile.modified file that changes the FROM directive and adds extra LABEL directives. The following example runs the custom Dockerfile on the nova-compute image: The following example shows the /home/stack/nova-custom/Dockerfile file. After you run any USER root directives, you must switch back to the original image default user: 3.12. Preparing a Satellite server for container images Red Hat Satellite 6 offers registry synchronization capabilities that you can use to pull multiple images into a Satellite server and manage them as part of an application life cycle. The Satellite also acts as a registry for other container-enabled systems to use. For more information about managing container images, see "Managing Container Images" in the Red Hat Satellite 6 Content Management Guide . The examples in this procedure use the hammer command line tool for Red Hat Satellite 6 and an example organization called ACME . Substitute this organization for your own Satellite 6 organization. Note This procedure requires authentication credentials to access container images from registry.redhat.io . Instead of using your individual user credentials, Red Hat recommends creating a registry service account and using those credentials to access registry.redhat.io content. For more information, see "Red Hat Container Registry Authentication" . Procedure Create a list of all container images: Copy the satellite_images file to a system that contains the Satellite 6 hammer tool. Alternatively, use the instructions in the Hammer CLI Guide to install the hammer tool to the undercloud. Run the following hammer command to create a new product ( OSP16 Containers ) in your Satellite organization: This custom product will contain your images. Add the base container image to the product: Add the overcloud container images from the satellite_images file: Add the Ceph Storage 4 container image: Synchronize the container images: Wait for the Satellite server to complete synchronization. Note Depending on your configuration, hammer might prompt you for your Satellite server username and password. You can configure hammer to log in automatically using a configuration file. For more information, see the "Authentication" section in the Hammer CLI Guide . 
If your Satellite 6 server uses content views, create a new content view version to incorporate the images and promote it along environments in your application life cycle. This largely depends on how you structure your application lifecycle. For example, if you have an environment called production in your lifecycle and you want the container images to be available in that environment, create a content view that includes the container images and promote that content view to the production environment. For more information, see "Managing Content Views" . Check the available tags for the base image: This command displays tags for the OpenStack Platform container images within a content view for a particular environment. Return to the undercloud and generate a default environment file that prepares images using your Satellite server as a source. Run the following example command to generate the environment file: --output-env-file is an environment file name. The contents of this file include the parameters for preparing your container images for the undercloud. In this case, the name of the file is containers-prepare-parameter.yaml . Edit the containers-prepare-parameter.yaml file and modify the following parameters: push_destination - Set this to true or false depending on your chosen container image management strategy. If you set this parameter to false , the overcloud nodes pull images directly from the Satellite. If you set this parameter to true , the director pulls the images from the Satellite to the undercloud registry and the overcloud pulls the images from the undercloud registry. namespace - The URL and port of the registry on the Satellite server. The default registry port on Red Hat Satellite is 5000. name_prefix - The prefix is based on a Satellite 6 convention. This differs depending on whether you use content views: If you use content views, the structure is [org]-[environment]-[content view]-[product]- . For example: acme-production-myosp16-osp16_containers- . If you do not use content views, the structure is [org]-[product]- . For example: acme-osp16_containers- . ceph_namespace , ceph_image , ceph_tag - If you use Ceph Storage, include these additional parameters to define the Ceph Storage container image location. Note that ceph_image now includes a Satellite-specific prefix. This prefix is the same value as the name_prefix option. The following example environment file contains Satellite-specific parameters: You must define the containers-prepare-parameter.yaml environment file in the undercloud.conf configuration file, otherwise the undercloud uses the default values:
[ "useradd stack", "passwd stack", "echo \"stack ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/stack chmod 0440 /etc/sudoers.d/stack", "su - stack [stack@director ~]USD", "[stack@director ~]USD mkdir ~/images [stack@director ~]USD mkdir ~/templates", "[stack@director ~]USD hostname [stack@director ~]USD hostname -f", "[stack@director ~]USD sudo hostnamectl set-hostname manager.example.com [stack@director ~]USD sudo hostnamectl set-hostname --transient manager.example.com", "10.0.0.1 manager.example.com manager", "[stack@director ~]USD sudo subscription-manager register", "[stack@director ~]USD sudo subscription-manager list --available --all --matches=\"Red Hat OpenStack\" Subscription Name: Name of SKU Provides: Red Hat Single Sign-On Red Hat Enterprise Linux Workstation Red Hat CloudForms Red Hat OpenStack Red Hat Software Collections (for RHEL Workstation) Red Hat Virtualization SKU: SKU-Number Contract: Contract-Number Pool ID: Valid-Pool-Number-123456 Provides Management: Yes Available: 1 Suggested: 1 Service Level: Support-level Service Type: Service-Type Subscription Type: Sub-type Ends: End-date System Type: Physical", "[stack@director ~]USD sudo subscription-manager attach --pool=Valid-Pool-Number-123456", "sudo subscription-manager release --set=8.1", "[stack@director ~]USD sudo subscription-manager repos --disable=* [stack@director ~]USD sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhel-8-for-x86_64-highavailability-eus-rpms --enable=ansible-2.8-for-rhel-8-x86_64-rpms --enable=openstack-16-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms", "[stack@director ~]USD sudo dnf update -y [stack@director ~]USD sudo reboot", "[stack@director ~]USD sudo dnf install -y python3-tripleoclient", "[stack@director ~]USD sudo subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms", "[stack@director ~]USD sudo dnf install -y ceph-ansible", "openstack tripleo container image prepare default --local-push-destination --output-env-file containers-prepare-parameter.yaml", "parameter_defaults: ContainerImagePrepare: - (strategy one) - (strategy two) - (strategy three)", "ContainerImagePrepare: - push_destination: true set: namespace: registry.redhat.io/ ContainerImageRegistryCredentials: registry.redhat.io: my_username: my_password", "ContainerImagePrepare: - set: namespace: registry.redhat.io/ ContainerImageRegistryCredentials: registry.redhat.io: my_username: my_password ContainerImageRegistryLogin: true", "ContainerImagePrepare: - tag_from_label: \"{version}-{release}\" push_destination: true excludes: - nova-api set: namespace: registry.redhat.io/rhosp-rhel8 name_prefix: openstack- name_suffix: '' tag: 16.0 - push_destination: true includes: - nova-api set: namespace: registry.redhat.io/rhosp-rhel8 tag: 16.0-44", "parameter_defaults: ContainerImagePrepare: - push_destination: true excludes: - ceph - prometheus set: ...", "parameter_defaults: ContainerImagePrepare: - (strategy one) - (strategy two) - (strategy three) ContainerImageRegistryCredentials: registry.example.com: username: \"p@55w0rd!\"", "parameter_defaults: ContainerImageRegistryCredentials: registry.redhat.io: myuser: 'p@55w0rd!' registry.internalsite.com: myuser2: '0th3rp@55w0rd!' 
'192.0.2.1:8787': myuser3: '@n0th3rp@55w0rd!'", "parameter_defaults: ContainerImageRegistryLogin: true", "sudo openstack tripleo container image prepare -e ~/containers-prepare-parameter.yaml", "ContainerImagePrepare: - push_destination: true modify_role: tripleo-modify-image modify_append_tag: \"-updated\" modify_vars: tasks_from: yum_update.yml compare_host_packages: true yum_repos_dir_path: /etc/yum.repos.d", "ContainerImagePrepare: - push_destination: true includes: - nova-compute modify_role: tripleo-modify-image modify_append_tag: \"-hotfix\" modify_vars: tasks_from: rpm_install.yml rpms_path: /home/stack/nova-hotfix-pkgs", "ContainerImagePrepare: - push_destination: true includes: - nova-compute modify_role: tripleo-modify-image modify_append_tag: \"-hotfix\" modify_vars: tasks_from: modify_image.yml modify_dir_path: /home/stack/nova-custom", "FROM registry.redhat.io/rhosp-rhel8/openstack-nova-compute:latest USER \"root\" COPY customize.sh /tmp/ RUN /tmp/customize.sh USER \"nova\"", "sudo podman search --limit 1000 \"registry.redhat.io/rhosp\" | grep rhosp-rhel8 | awk '{ print USD2 }' | grep -v beta | sed \"s/registry.redhat.io\\///g\" | tail -n+2 > satellite_images", "hammer product create --organization \"ACME\" --name \"OSP16 Containers\"", "hammer repository create --organization \"ACME\" --product \"OSP16 Containers\" --content-type docker --url https://registry.redhat.io --docker-upstream-name rhosp-rhel8/openstack-base --upstream-username USERNAME --upstream-password PASSWORD --name base", "while read IMAGE; do IMAGENAME=USD(echo USDIMAGE | cut -d\"/\" -f2 | sed \"s/openstack-//g\" | sed \"s/:.*//g\") ; hammer repository create --organization \"ACME\" --product \"OSP16 Containers\" --content-type docker --url https://registry.redhat.io --docker-upstream-name USDIMAGE --upstream-username USERNAME --upstream-password PASSWORD --name USDIMAGENAME ; done < satellite_images", "hammer repository create --organization \"ACME\" --product \"OSP16 Containers\" --content-type docker --url https://registry.redhat.io --docker-upstream-name rhceph-beta/rhceph-4-rhel8 --upstream-username USERNAME --upstream-password PASSWORD --name rhceph-4-rhel8", "hammer product synchronize --organization \"ACME\" --name \"OSP16 Containers\"", "hammer docker tag list --repository \"base\" --organization \"ACME\" --lifecycle-environment \"production\" --content-view \"myosp16\" --product \"OSP16 Containers\"", "openstack tripleo container image prepare default --output-env-file containers-prepare-parameter.yaml", "parameter_defaults: ContainerImagePrepare: - push_destination: false set: ceph_image: acme-production-myosp16-osp16_containers-rhceph-4 ceph_namespace: satellite.example.com:5000 ceph_tag: latest name_prefix: acme-production-myosp16-osp16_containers- name_suffix: '' namespace: satellite.example.com:5000 neutron_driver: null tag: 16.0 tag_from_label: '{version}-{release}'", "container_images_file = /home/stack/containers-prepare-parameter.yaml" ]
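The custom Dockerfile example above copies a customize.sh script into the nova-compute image and runs it as root. The script itself is not part of this guide; the following is only a minimal illustrative sketch of what such a script might contain, assuming the image ships a default /etc/nova/nova.conf file. The specific edits are placeholders for your own customizations:

#!/bin/bash
# Illustrative customization script run during the image rebuild (placeholder changes only).
set -euo pipefail
# Example: enable debug logging in the configuration file shipped in the image.
sed -i 's/^#\?debug *=.*/debug = True/' /etc/nova/nova.conf
# Example: leave a marker so the customized image can be identified later.
echo "custom hotfix build: $(date -u +%Y-%m-%dT%H:%M:%SZ)" > /etc/nova-custom-build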
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/director_installation_and_usage/preparing-for-director-installation
Chapter 5. HardwareData [metal3.io/v1alpha1]
Chapter 5. HardwareData [metal3.io/v1alpha1] Description HardwareData is the Schema for the hardwaredata API Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object HardwareDataSpec defines the desired state of HardwareData 5.1.1. .spec Description HardwareDataSpec defines the desired state of HardwareData Type object Property Type Description hardware object The hardware discovered on the host during its inspection. 5.1.2. .spec.hardware Description The hardware discovered on the host during its inspection. Type object Property Type Description cpu object CPU describes one processor on the host. firmware object Firmware describes the firmware on the host. hostname string nics array nics[] object NIC describes one network interface on the host. ramMebibytes integer storage array storage[] object Storage describes one storage device (disk, SSD, etc.) on the host. systemVendor object HardwareSystemVendor stores details about the whole hardware system. 5.1.3. .spec.hardware.cpu Description CPU describes one processor on the host. Type object Property Type Description arch string clockMegahertz number ClockSpeed is a clock speed in MHz count integer flags array (string) model string 5.1.4. .spec.hardware.firmware Description Firmware describes the firmware on the host. Type object Property Type Description bios object The BIOS for this firmware 5.1.5. .spec.hardware.firmware.bios Description The BIOS for this firmware Type object Property Type Description date string The release/build date for this BIOS vendor string The vendor name for this BIOS version string The version of the BIOS 5.1.6. .spec.hardware.nics Description Type array 5.1.7. .spec.hardware.nics[] Description NIC describes one network interface on the host. Type object Property Type Description ip string The IP address of the interface. This will be an IPv4 or IPv6 address if one is present. If both IPv4 and IPv6 addresses are present in a dual-stack environment, two nics will be output, one with each IP. mac string The device MAC address model string The vendor and product IDs of the NIC, e.g. "0x8086 0x1572" name string The name of the network interface, e.g. "en0" pxe boolean Whether the NIC is PXE Bootable speedGbps integer The speed of the device in Gigabits per second vlanId integer The untagged VLAN ID vlans array The VLANs available vlans[] object VLAN represents the name and ID of a VLAN 5.1.8. .spec.hardware.nics[].vlans Description The VLANs available Type array 5.1.9. .spec.hardware.nics[].vlans[] Description VLAN represents the name and ID of a VLAN Type object Property Type Description id integer VLANID is a 12-bit 802.1Q VLAN identifier name string 5.1.10. .spec.hardware.storage Description Type array 5.1.11. 
.spec.hardware.storage[] Description Storage describes one storage device (disk, SSD, etc.) on the host. Type object Property Type Description hctl string The SCSI location of the device model string Hardware model name string The Linux device name of the disk, e.g. "/dev/sda". Note that this may not be stable across reboots. rotational boolean Whether this disk represents rotational storage. This field is not recommended for usage, please prefer using 'Type' field instead, this field will be deprecated eventually. serialNumber string The serial number of the device sizeBytes integer The size of the disk in Bytes type string Device type, one of: HDD, SSD, NVME. vendor string The name of the vendor of the device wwn string The WWN of the device wwnVendorExtension string The WWN Vendor extension of the device wwnWithExtension string The WWN with the extension 5.1.12. .spec.hardware.systemVendor Description HardwareSystemVendor stores details about the whole hardware system. Type object Property Type Description manufacturer string productName string serialNumber string 5.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/hardwaredata GET : list objects of kind HardwareData /apis/metal3.io/v1alpha1/namespaces/{namespace}/hardwaredata DELETE : delete collection of HardwareData GET : list objects of kind HardwareData POST : create a HardwareData /apis/metal3.io/v1alpha1/namespaces/{namespace}/hardwaredata/{name} DELETE : delete a HardwareData GET : read the specified HardwareData PATCH : partially update the specified HardwareData PUT : replace the specified HardwareData 5.2.1. /apis/metal3.io/v1alpha1/hardwaredata Table 5.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. 
Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind HardwareData Table 5.2. HTTP responses HTTP code Reponse body 200 - OK HardwareDataList schema 401 - Unauthorized Empty 5.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hardwaredata Table 5.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 5.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of HardwareData Table 5.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind HardwareData Table 5.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. 
The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.8. HTTP responses HTTP code Reponse body 200 - OK HardwareDataList schema 401 - Unauthorized Empty HTTP method POST Description create a HardwareData Table 5.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.10. Body parameters Parameter Type Description body HardwareData schema Table 5.11. HTTP responses HTTP code Reponse body 200 - OK HardwareData schema 201 - Created HardwareData schema 202 - Accepted HardwareData schema 401 - Unauthorized Empty 5.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hardwaredata/{name} Table 5.12. 
Global path parameters Parameter Type Description name string name of the HardwareData namespace string object name and auth scope, such as for teams and projects Table 5.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a HardwareData Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.15. Body parameters Parameter Type Description body DeleteOptions schema Table 5.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified HardwareData Table 5.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 5.18. HTTP responses HTTP code Reponse body 200 - OK HardwareData schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified HardwareData Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.20. Body parameters Parameter Type Description body Patch schema Table 5.21. HTTP responses HTTP code Reponse body 200 - OK HardwareData schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified HardwareData Table 5.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.23. Body parameters Parameter Type Description body HardwareData schema Table 5.24. HTTP responses HTTP code Reponse body 200 - OK HardwareData schema 201 - Created HardwareData schema 401 - Unauthorized Empty
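To make the schema above more concrete, the following is a hypothetical, abbreviated HardwareData object as you might see it when reading the resource back, for example with a command such as oc get hardwaredata <name> -n openshift-machine-api -o yaml. The namespace, names, and hardware values are illustrative only and are not taken from any real host:

apiVersion: metal3.io/v1alpha1
kind: HardwareData
metadata:
  name: worker-0
  namespace: openshift-machine-api
spec:
  hardware:
    hostname: worker-0
    ramMebibytes: 65536
    cpu:
      arch: x86_64
      model: Intel(R) Xeon(R) Gold 6230 CPU
      count: 40
      clockMegahertz: 2100
    firmware:
      bios:
        vendor: Dell Inc.
        version: "2.8.1"
        date: 06/30/2020
    nics:
    - name: eno1
      mac: "52:54:00:aa:bb:cc"
      ip: 198.51.100.10
      pxe: true
      speedGbps: 25
    storage:
    - name: /dev/sda
      type: SSD
      sizeBytes: 480103981056
      serialNumber: S3Z8NX0M700000
      rotational: false
    systemVendor:
      manufacturer: Dell Inc.
      productName: PowerEdge R640
      serialNumber: ABC1234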
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/provisioning_apis/hardwaredata-metal3-io-v1alpha1
Chapter 2. Configuring an Azure account
Chapter 2. Configuring an Azure account Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account to meet installation requirements. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 2.1. Azure account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters. Important Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores. Check the limits for your subscription type and if necessary, increase quota limits for your account before you install a default cluster on Azure. The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Default Azure limit Description vCPU 44 20 per region A default cluster requires 44 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap and control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the compute machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 44 vCPUs. The bootstrap node VM, which uses 8 vCPUs, is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. OS Disk 7 Each cluster machine must have a minimum of 100 GB of storage and 300 IOPS. While these are the minimum supported values, faster storage is recommended for production clusters and clusters with intensive workloads. For more information about optimizing storage for performance, see the page titled "Optimizing storage" in the "Scalability and performance" section. VNet 1 1000 per region Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 65,536 per region Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 5000 Each cluster creates network security groups for each subnet in the VNet. 
The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 1000 per region Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 3 Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Spot VM vCPUs (optional) 0 If you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node. 20 per region This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster. Note Using spot VMs for control plane nodes is not recommended. Additional resources Optimizing storage . 2.2. Configuring a public DNS zone in Azure To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source. Note For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation. Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. 2.3. Increasing Azure account limits To increase an account limit, file a support request on the Azure portal. Note You can increase only one type of quota per support request. Procedure From the Azure portal, click Help + support in the lower left corner. Click New support request and then select the required values: From the Issue type list, select Service and subscription limits (quotas) . From the Subscription list, select the subscription to modify. From the Quota type list, select the quota to increase. 
For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster. Click Next: Solutions . On the Problem Details page, provide the required information for your quota increase: Click Provide details and provide the required details in the Quota details window. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details. Click Next: Review + create and then click Create . 2.4. Recording the subscription and tenant IDs The installation program requires the subscription and tenant IDs that are associated with your Azure account. You can use the Azure CLI to gather this information. Prerequisites You have installed or updated the Azure CLI . Procedure Log in to the Azure CLI by running the following command: $ az login Ensure that you are using the right subscription: View a list of available subscriptions by running the following command: $ az account list --refresh Example output [ { "cloudName": "AzureCloud", "id": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": true, "name": "Subscription Name 1", "state": "Enabled", "tenantId": "6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } }, { "cloudName": "AzureCloud", "id": "9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": false, "name": "Subscription Name 2", "state": "Enabled", "tenantId": "7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } } ] View the details of the active account, and confirm that this is the subscription you want to use, by running the following command: $ az account show Example output { "environmentName": "AzureCloud", "id": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": true, "name": "Subscription Name 1", "state": "Enabled", "tenantId": "6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } } If you are not using the right subscription: Change the active subscription by running the following command: $ az account set -s <subscription_id> Verify that you are using the subscription you need by running the following command: $ az account show Example output { "environmentName": "AzureCloud", "id": "9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": true, "name": "Subscription Name 2", "state": "Enabled", "tenantId": "7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } } Record the id and tenantId parameter values from the output. You require these values to install an OpenShift Container Platform cluster. 2.5. Supported identities to access Azure resources An OpenShift Container Platform cluster requires an Azure identity to create and manage Azure resources. As such, you need one of the following types of identities to complete the installation: A service principal A system-assigned managed identity A user-assigned managed identity 2.5.1. Required Azure roles An OpenShift Container Platform cluster requires an Azure identity to create and manage Azure resources. Before you create the identity, verify that your environment meets the following requirements: The Azure account that you use to create the identity is assigned the User Access Administrator and Contributor roles. These roles are required when: Creating a service principal or user-assigned managed identity. Enabling a system-assigned managed identity on a virtual machine.
If you are going to use a service principal to complete the installation, verify that the Azure account that you use to create the identity is assigned the microsoft.directory/servicePrincipals/createAsOwner permission in Microsoft Entra ID. To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation. 2.5.2. Required Azure permissions for installer-provisioned infrastructure The installation program requires access to an Azure service principal or managed identity with the necessary permissions to deploy the cluster and to maintain its daily operation. These permissions must be granted to the Azure subscription that is associated with the identity. The following options are available to you: You can assign the identity the Contributor and User Access Administrator roles. Assigning these roles is the quickest way to grant all of the required permissions. For more information about assigning roles, see the Azure documentation for managing access to Azure resources using the Azure portal . If your organization's security policies require a more restrictive set of permissions, you can create a custom role with the necessary permissions. The following permissions are required for creating an OpenShift Container Platform cluster on Microsoft Azure. Example 2.1. Required permissions for creating authorization resources Microsoft.Authorization/policies/audit/action Microsoft.Authorization/policies/auditIfNotExists/action Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/write Example 2.2. Required permissions for creating compute resources Microsoft.Compute/availabilitySets/read Microsoft.Compute/availabilitySets/write Microsoft.Compute/disks/beginGetAccess/action Microsoft.Compute/disks/delete Microsoft.Compute/disks/read Microsoft.Compute/disks/write Microsoft.Compute/galleries/images/read Microsoft.Compute/galleries/images/versions/read Microsoft.Compute/galleries/images/versions/write Microsoft.Compute/galleries/images/write Microsoft.Compute/galleries/read Microsoft.Compute/galleries/write Microsoft.Compute/snapshots/read Microsoft.Compute/snapshots/write Microsoft.Compute/snapshots/delete Microsoft.Compute/virtualMachines/delete Microsoft.Compute/virtualMachines/powerOff/action Microsoft.Compute/virtualMachines/read Microsoft.Compute/virtualMachines/write Example 2.3. Required permissions for creating identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/assign/action Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Example 2.4. 
Required permissions for creating network resources Microsoft.Network/dnsZones/A/write Microsoft.Network/dnsZones/CNAME/write Microsoft.Network/dnszones/CNAME/read Microsoft.Network/dnszones/read Microsoft.Network/loadBalancers/backendAddressPools/join/action Microsoft.Network/loadBalancers/backendAddressPools/read Microsoft.Network/loadBalancers/backendAddressPools/write Microsoft.Network/loadBalancers/read Microsoft.Network/loadBalancers/write Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkInterfaces/join/action Microsoft.Network/networkInterfaces/read Microsoft.Network/networkInterfaces/write Microsoft.Network/networkSecurityGroups/join/action Microsoft.Network/networkSecurityGroups/read Microsoft.Network/networkSecurityGroups/securityRules/delete Microsoft.Network/networkSecurityGroups/securityRules/read Microsoft.Network/networkSecurityGroups/securityRules/write Microsoft.Network/networkSecurityGroups/write Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/A/write Microsoft.Network/privateDnsZones/A/delete Microsoft.Network/privateDnsZones/SOA/read Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/write Microsoft.Network/privateDnsZones/write Microsoft.Network/publicIPAddresses/delete Microsoft.Network/publicIPAddresses/join/action Microsoft.Network/publicIPAddresses/read Microsoft.Network/publicIPAddresses/write Microsoft.Network/virtualNetworks/join/action Microsoft.Network/virtualNetworks/read Microsoft.Network/virtualNetworks/subnets/join/action Microsoft.Network/virtualNetworks/subnets/read Microsoft.Network/virtualNetworks/subnets/write Microsoft.Network/virtualNetworks/write Note The following permissions are not required to create the private OpenShift Container Platform cluster on Azure. Microsoft.Network/dnsZones/A/write Microsoft.Network/dnsZones/CNAME/write Microsoft.Network/dnszones/CNAME/read Microsoft.Network/dnszones/read Example 2.5. Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/InProgress/action Microsoft.Resourcehealth/healthevent/Pending/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 2.6. Required permissions for creating a resource group Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourcegroups/write Example 2.7. Required permissions for creating resource tags Microsoft.Resources/tags/write Example 2.8. Required permissions for creating storage resources Microsoft.Storage/storageAccounts/blobServices/read Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/fileServices/read Microsoft.Storage/storageAccounts/fileServices/shares/read Microsoft.Storage/storageAccounts/fileServices/shares/write Microsoft.Storage/storageAccounts/fileServices/shares/delete Microsoft.Storage/storageAccounts/listKeys/action Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Example 2.9. Optional permissions for creating marketplace virtual machine resources Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/read Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/write Example 2.10. 
Optional permissions for creating compute resources Microsoft.Compute/availabilitySets/delete Microsoft.Compute/images/read Microsoft.Compute/images/write Microsoft.Compute/images/delete Example 2.11. Optional permissions for enabling user-managed encryption Microsoft.Compute/diskEncryptionSets/read Microsoft.Compute/diskEncryptionSets/write Microsoft.Compute/diskEncryptionSets/delete Microsoft.KeyVault/vaults/read Microsoft.KeyVault/vaults/write Microsoft.KeyVault/vaults/delete Microsoft.KeyVault/vaults/deploy/action Microsoft.KeyVault/vaults/keys/read Microsoft.KeyVault/vaults/keys/write Microsoft.Features/providers/features/register/action Example 2.12. Optional permissions for installing a cluster using the NatGateway outbound type Microsoft.Network/natGateways/read Microsoft.Network/natGateways/write Example 2.13. Optional permissions for installing a private cluster with Azure Network Address Translation (NAT) Microsoft.Network/natGateways/join/action Microsoft.Network/natGateways/read Microsoft.Network/natGateways/write Example 2.14. Optional permissions for installing a private cluster with Azure firewall Microsoft.Network/azureFirewalls/applicationRuleCollections/write Microsoft.Network/azureFirewalls/read Microsoft.Network/azureFirewalls/write Microsoft.Network/routeTables/join/action Microsoft.Network/routeTables/read Microsoft.Network/routeTables/routes/read Microsoft.Network/routeTables/routes/write Microsoft.Network/routeTables/write Microsoft.Network/virtualNetworks/peer/action Microsoft.Network/virtualNetworks/virtualNetworkPeerings/read Microsoft.Network/virtualNetworks/virtualNetworkPeerings/write Example 2.15. Optional permission for running gather bootstrap Microsoft.Compute/virtualMachines/retrieveBootDiagnosticsData/action The following permissions are required for deleting an OpenShift Container Platform cluster on Microsoft Azure. You can use the same permissions to delete a private OpenShift Container Platform cluster on Azure. Example 2.16. Required permissions for deleting authorization resources Microsoft.Authorization/roleAssignments/delete Example 2.17. Required permissions for deleting compute resources Microsoft.Compute/disks/delete Microsoft.Compute/galleries/delete Microsoft.Compute/galleries/images/delete Microsoft.Compute/galleries/images/versions/delete Microsoft.Compute/virtualMachines/delete Example 2.18. Required permissions for deleting identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/delete Example 2.19. Required permissions for deleting network resources Microsoft.Network/dnszones/read Microsoft.Network/dnsZones/A/read Microsoft.Network/dnsZones/A/delete Microsoft.Network/dnsZones/CNAME/read Microsoft.Network/dnsZones/CNAME/delete Microsoft.Network/loadBalancers/delete Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkSecurityGroups/delete Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/delete Microsoft.Network/privateDnsZones/virtualNetworkLinks/delete Microsoft.Network/publicIPAddresses/delete Microsoft.Network/virtualNetworks/delete Note The following permissions are not required to delete a private OpenShift Container Platform cluster on Azure. Microsoft.Network/dnszones/read Microsoft.Network/dnsZones/A/read Microsoft.Network/dnsZones/A/delete Microsoft.Network/dnsZones/CNAME/read Microsoft.Network/dnsZones/CNAME/delete Example 2.20. 
Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 2.21. Required permissions for deleting a resource group Microsoft.Resources/subscriptions/resourcegroups/delete Example 2.22. Required permissions for deleting storage resources Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/listKeys/action Note To install OpenShift Container Platform on Azure, you must scope the permissions to your subscription. Later, you can re-scope these permissions to the installer-created resource group. If the public DNS zone is present in a different resource group, then the network DNS zone related permissions must always be applied to your subscription. By default, the OpenShift Container Platform installation program assigns the Azure identity the Contributor role. You can scope all the permissions to your subscription when deleting an OpenShift Container Platform cluster. 2.5.3. Using Azure managed identities The installation program requires an Azure identity to complete the installation. You can use either a system-assigned or user-assigned managed identity. If you are unable to use a managed identity, you can use a service principal. Procedure If you are using a system-assigned managed identity, enable it on the virtual machine that you will run the installation program from. If you are using a user-assigned managed identity: Assign it to the virtual machine that you will run the installation program from. Record its client ID. You require this value when installing the cluster. For more information about viewing the details of a user-assigned managed identity, see the Microsoft Azure documentation for listing user-assigned managed identities. Verify that the required permissions are assigned to the managed identity. 2.5.4. Creating a service principal The installation program requires an Azure identity to complete the installation. You can use a service principal. If you are unable to use a service principal, you can use a managed identity. Prerequisites You have installed or updated the Azure CLI. You have an Azure subscription ID. If you are not going to assign the Contributor and User Access Administrator roles to the service principal, you have created a custom role with the required Azure permissions. Procedure Create the service principal for your account by running the following command: USD az ad sp create-for-rbac --role <role_name> \ 1 --name <service_principal> \ 2 --scopes /subscriptions/<subscription_id> 3 1 Defines the role name. You can use the Contributor role, or you can specify a custom role that contains the necessary permissions. 2 Defines the service principal name. 3 Specifies the subscription ID. Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "axxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "displayName": "<service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" } Record the values of the appId and password parameters from the output. You require these values when installing the cluster. 
If you applied the Contributor role to your service principal, assign the User Access Administrator role by running the following command: USD az role assignment create --role "User Access Administrator" \ --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1 --scope /subscriptions/<subscription_id> 2 1 Specifies the appId parameter value for your service principal. 2 Specifies the subscription ID. Additional resources About the Cloud Credential Operator 2.6. Supported Azure Marketplace regions Installing a cluster using the Azure Marketplace image is available to customers who purchase the offer in North America and EMEA. While the offer must be purchased in North America or EMEA, you can deploy the cluster to any of the Azure public partitions that OpenShift Container Platform supports. Note Deploying a cluster using the Azure Marketplace image is not supported for the Azure Government regions. 2.7. Supported Azure regions The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription. Supported Azure public regions australiacentral (Australia Central) australiaeast (Australia East) australiasoutheast (Australia South East) brazilsouth (Brazil South) canadacentral (Canada Central) canadaeast (Canada East) centralindia (Central India) centralus (Central US) eastasia (East Asia) eastus (East US) eastus2 (East US 2) francecentral (France Central) germanywestcentral (Germany West Central) israelcentral (Israel Central) italynorth (Italy North) japaneast (Japan East) japanwest (Japan West) koreacentral (Korea Central) koreasouth (Korea South) mexicocentral (Mexico Central) newzealandnorth (New Zealand North) northcentralus (North Central US) northeurope (North Europe) norwayeast (Norway East) polandcentral (Poland Central) qatarcentral (Qatar Central) southafricanorth (South Africa North) southcentralus (South Central US) southeastasia (Southeast Asia) southindia (South India) spaincentral (Spain Central) swedencentral (Sweden Central) switzerlandnorth (Switzerland North) uaenorth (UAE North) uksouth (UK South) ukwest (UK West) westcentralus (West Central US) westeurope (West Europe) westindia (West India) westus (West US) westus2 (West US 2) westus3 (West US 3) Supported Azure Government regions Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6: usgovtexas (US Gov Texas) usgovvirginia (US Gov Virginia) You can reference all available MAG regions in the Azure documentation. Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested. 2.8. Next steps Install an OpenShift Container Platform cluster on Azure. You can install a customized cluster or quickly install a cluster with default options.
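Section 2.5.3 describes enabling a managed identity without showing the corresponding commands. The following is a minimal Azure CLI sketch of those steps; it is not taken from this document, and the resource group, virtual machine, and identity names are placeholders:
# Enable a system-assigned managed identity on the VM that runs the installation program
az vm identity assign --resource-group <resource_group> --name <vm_name>
# For a user-assigned managed identity, print its client ID so that you can record it for the installation
az identity show --resource-group <resource_group> --name <identity_name> --query clientId --output tsv
After enabling or assigning the identity, verify that it has the required permissions listed earlier in this chapter before running the installation program.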
[ "az login", "az account list --refresh", "[ { \"cloudName\": \"AzureCloud\", \"id\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 1\", \"state\": \"Enabled\", \"tenantId\": \"6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }, { \"cloudName\": \"AzureCloud\", \"id\": \"9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": false, \"name\": \"Subscription Name 2\", \"state\": \"Enabled\", \"tenantId\": \"7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]", "az account show", "{ \"environmentName\": \"AzureCloud\", \"id\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 1\", \"state\": \"Enabled\", \"tenantId\": \"6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az account set -s <subscription_id>", "az account show", "{ \"environmentName\": \"AzureCloud\", \"id\": \"9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 2\", \"state\": \"Enabled\", \"tenantId\": \"7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az ad sp create-for-rbac --role <role_name> \\ 1 --name <service_principal> \\ 2 --scopes /subscriptions/<subscription_id> 3", "Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"axxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\" }", "az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1 --scope /subscriptions/<subscription_id> 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_azure/installing-azure-account
Chapter 4. Migrating from Fabric8 Maven plugin to OpenShift Maven plugin
Chapter 4. Migrating from Fabric8 Maven plugin to OpenShift Maven plugin The fabric8-maven-plugin has been completely removed from Fuse 7.11. We recommend that you use the openshift-maven-plugin instead for building and deploying Maven projects in Fuse on OpenShift. Procedure Use the following instructions to update your application so that it can use the openshift-maven-plugin. Rename the src/main/fabric8 directories in your applications to src/main/jkube. Locate the org.jboss.redhat-fuse:fabric8-maven-plugin dependency in your project's pom.xml and change it to org.jboss.redhat-fuse:openshift-maven-plugin, as shown in the sketch after this procedure. See the Sample pom.xml. Check the dependencies. For example, org.arquillian.cube:arquillian-cube-openshift, org.jboss.arquillian.junit:arquillian-junit-container, and io.fabric8:kubernetes-assertions are no longer used in our examples and may no longer be needed. You can create sample tests that reflect the API changes introduced by the migration. For more information, see the sample tests in the Spring Boot Camel quickstart. Additional resources OpenShift Maven plugin.
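For reference, a minimal sketch of the plugin declaration after the change is shown below. It is not taken from this guide; the version property name fuse.version is illustrative, and your project may manage the plugin version differently:
<plugin>
  <groupId>org.jboss.redhat-fuse</groupId>
  <!-- previously: <artifactId>fabric8-maven-plugin</artifactId> -->
  <artifactId>openshift-maven-plugin</artifactId>
  <version>${fuse.version}</version> <!-- illustrative version property -->
</plugin>
The groupId stays org.jboss.redhat-fuse and only the artifactId changes, so existing executions and configuration blocks can usually be kept as they are.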
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/migration_guide/migrate-to-openshift-maven-plugin
Chapter 4. Configuring HTTPS Cipher Suites
Chapter 4. Configuring HTTPS Cipher Suites Abstract This chapter explains how to specify the list of cipher suites that are made available to clients and servers for the purpose of establishing HTTPS connections. During a security handshake, the client chooses a cipher suite that matches one of the cipher suites available to the server. 4.1. Supported Cipher Suites Overview A cipher suite is a collection of security algorithms that determine precisely how an SSL/TLS connection is implemented. For example, the SSL/TLS protocol mandates that messages be signed using a message digest algorithm. The choice of digest algorithm, however, is determined by the particular cipher suite being used for the connection. Typically, an application can choose either the MD5 or the SHA digest algorithm. The cipher suites available for SSL/TLS security in Apache CXF depend on the particular JSSE provider that is specified on the endpoint. JCE/JSSE and security providers The Java Cryptography Extension (JCE) and the Java Secure Socket Extension (JSSE) constitute a pluggable framework that allows you to replace the Java security implementation with arbitrary third-party toolkits, known as security providers . SunJSSE provider In practice, the security features of Apache CXF have been tested only with SUN's JSSE provider, which is named SunJSSE . Hence, the SSL/TLS implementation and the list of available cipher suites in Apache CXF are effectively determined by what is available from SUN's JSSE provider. Cipher suites supported by SunJSSE The following cipher suites are supported by SUN's JSSE provider in the J2SE 1.5.0 Java development kit (see also Appendix A of SUN's JSSE Reference Guide ): Standard ciphers: Null encryption, integrity-only ciphers: Anonymous Diffie-Hellman ciphers (no authentication): JSSE reference guide For more information about SUN's JSSE framework, please consult the JSSE Reference Guide at the following location: http://download.oracle.com/javase/1.5.0/docs/guide/security/jsse/JSSERefGuide.html 4.2. Cipher Suite Filters Overview In a typical application, you usually want to restrict the list of available cipher suites to a subset of the ciphers supported by the JSSE provider. Generally, you should use the sec:cipherSuitesFilter element, instead of the sec:cipherSuites element to select the cipher suites you want to use. The sec:cipherSuites element is not recommended for general use, because it has rather non-intuitive semantics: you can use it to require that the loaded security provider supports at least the listed cipher suites. But the security provider that is loaded might support many more cipher suites than the ones that are specified. Hence, when you use the sec:cipherSuites element, it is not clear exactly which cipher suites are supported at run time. Namespaces Table 4.1, "Namespaces Used for Configuring Cipher Suite Filters" shows the XML namespaces that are referenced in this section: Table 4.1. Namespaces Used for Configuring Cipher Suite Filters Prefix Namespace URI http http://cxf.apache.org/transports/http/configuration httpj http://cxf.apache.org/transports/http-jetty/configuration sec http://cxf.apache.org/configuration/security sec:cipherSuitesFilter element You define a cipher suite filter using the sec:cipherSuitesFilter element, which can be a child of either a http:tlsClientParameters element or a httpj:tlsServerParameters element. 
A typical sec:cipherSuitesFilter element has the outline structure shown in Example 4.1, "Structure of a sec:cipherSuitesFilter Element" . Example 4.1. Structure of a sec:cipherSuitesFilter Element Semantics The following semantic rules apply to the sec:cipherSuitesFilter element: If a sec:cipherSuitesFilter element does not appear in an endpoint's configuration (that is, it is absent from the relevant http:conduit or httpj:engine-factory element), the following default filter is used: If the sec:cipherSuitesFilter element does appear in an endpoint's configuration, all cipher suites are excluded by default. To include cipher suites, add a sec:include child element to the sec:cipherSuitesFilter element. The content of the sec:include element is a regular expression that matches one or more cipher suite names (for example, see the cipher suite names in the section called "Cipher suites supported by SunJSSE" ). To refine the selected set of cipher suites further, you can add a sec:exclude element to the sec:cipherSuitesFilter element. The content of the sec:exclude element is a regular expression that matches zero or more cipher suite names from the currently included set. Note Sometimes it makes sense to explicitly exclude cipher suites that are currently not included, in order to future-proof against accidental inclusion of undesired cipher suites. Regular expression matching The grammar for the regular expressions that appear in the sec:include and sec:exclude elements is defined by the Java regular expression utility, java.util.regex.Pattern . For a detailed description of the grammar, please consult the Java reference guide, http://download.oracle.com/javase/1.5.0/docs/api/java/util/regex/Pattern.html . Client conduit example The following XML configuration shows an example of a client that applies a cipher suite filter to the remote endpoint, { WSDLPortNamespace } PortName . Whenever the client attempts to open an SSL/TLS connection to this endpoint, it restricts the available cipher suites to the set selected by the sec:cipherSuitesFilter element. 4.3. SSL/TLS Protocol Version Overview The versions of the SSL/TLS protocol that are supported by Apache CXF depend on the particular JSSE provider configured. By default, the JSSE provider is configured to be SUN's JSSE provider implementation. Warning If you enable SSL/TLS security, you must ensure that you explicitly disable the SSLv3 protocol, in order to safeguard against the Poodle vulnerability (CVE-2014-3566) . For more details, see Disabling SSLv3 in JBoss Fuse 6.x and JBoss A-MQ 6.x . SSL/TLS protocol versions supported by SunJSSE Table 4.2, "SSL/TLS Protocols Supported by SUN's JSSE Provider" shows the SSL/TLS protocol versions supported by SUN's JSSE provider. Table 4.2. SSL/TLS Protocols Supported by SUN's JSSE Provider Protocol Description SSLv2Hello Do not use! (POODLE security vulnerability) SSLv3 Do not use! (POODLE security vulnerability) TLSv1 Supports TLS version 1 TLSv1.1 Supports TLS version 1.1 (JDK 7 or later) TLSv1.2 Supports TLS version 1.2 (JDK 7 or later) Excluding specific SSL/TLS protocol versions By default, all of the SSL/TLS protocols provided by the JSSE provider are available to the CXF endpoints (except for the SSLv2Hello and SSLv3 protocols, which have been specifically excluded by the CXF runtime since Fuse version 6.2.0, because of the Poodle vulnerability (CVE-2014-3566) ). To exclude specific SSL/TLS protocols, use the sec:excludeProtocols element in the endpoint configuration. 
You can configure the sec:excludeProtocols element as a child of the httpj:tlsServerParameters element (server side). To exclude all protocols except for TLS version 1.2, configure the sec:excludeProtocols element as follows (assuming you are using JDK 7 or later): Important It is recommended that you always exclude the SSLv2Hello and SSLv3 protocols, to protect against the Poodle vulnerability (CVE-2014-3566) . secureSocketProtocol attribute Both the http:tlsClientParameters element and the httpj:tlsServerParameters element support the secureSocketProtocol attribute, which enables you to specify a particular protocol. The semantics of this attribute are confusing, however: this attribute forces CXF to pick an SSL provider that supports the specified protocol, but it does not restrict the provider to use only the specified protocol . Hence, the endpoint could end up using a protocol that is different from the one specified. For this reason, the recommendation is that you do not use the secureSocketProtocol attribute in your code.
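To complement the client conduit example, the following sketch (not taken from this guide) shows a cipher suite filter applied on the server side, as a child of the httpj:tlsServerParameters element; the port number and the include/exclude patterns are illustrative only and should be adapted to your own security policy:
<httpj:engine-factory bus="cxf">
  <httpj:engine port="9001">
    <httpj:tlsServerParameters>
      <sec:cipherSuitesFilter>
        <sec:include>.*_WITH_AES_.*</sec:include>
        <sec:include>.*_WITH_3DES_.*</sec:include>
        <sec:exclude>.*_WITH_NULL_.*</sec:exclude>
        <sec:exclude>.*_DH_anon_.*</sec:exclude>
      </sec:cipherSuitesFilter>
    </httpj:tlsServerParameters>
  </httpj:engine>
</httpj:engine-factory>
As in the client case, all cipher suites are excluded by default once the filter element is present, so the sec:include patterns define the allowed set and the sec:exclude patterns refine it.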
[ "SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA SSL_DHE_DSS_WITH_DES_CBC_SHA SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA SSL_DHE_RSA_WITH_DES_CBC_SHA SSL_RSA_EXPORT_WITH_DES40_CBC_SHA SSL_RSA_EXPORT_WITH_RC4_40_MD5 SSL_RSA_WITH_3DES_EDE_CBC_SHA SSL_RSA_WITH_DES_CBC_SHA SSL_RSA_WITH_RC4_128_MD5 SSL_RSA_WITH_RC4_128_SHA TLS_DHE_DSS_WITH_AES_128_CBC_SHA TLS_DHE_DSS_WITH_AES_256_CBC_SHA TLS_DHE_RSA_WITH_AES_128_CBC_SHA TLS_DHE_RSA_WITH_AES_256_CBC_SHA TLS_KRB5_EXPORT_WITH_DES_CBC_40_MD5 TLS_KRB5_EXPORT_WITH_DES_CBC_40_SHA TLS_KRB5_EXPORT_WITH_RC4_40_MD5 TLS_KRB5_EXPORT_WITH_RC4_40_SHA TLS_KRB5_WITH_3DES_EDE_CBC_MD5 TLS_KRB5_WITH_3DES_EDE_CBC_SHA TLS_KRB5_WITH_DES_CBC_MD5 TLS_KRB5_WITH_DES_CBC_SHA TLS_KRB5_WITH_RC4_128_MD5 TLS_KRB5_WITH_RC4_128_SHA TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA", "SSL_RSA_WITH_NULL_MD5 SSL_RSA_WITH_NULL_SHA", "SSL_DH_anon_EXPORT_WITH_DES40_CBC_SHA SSL_DH_anon_EXPORT_WITH_RC4_40_MD5 SSL_DH_anon_WITH_3DES_EDE_CBC_SHA SSL_DH_anon_WITH_DES_CBC_SHA SSL_DH_anon_WITH_RC4_128_MD5 TLS_DH_anon_WITH_AES_128_CBC_SHA TLS_DH_anon_WITH_AES_256_CBC_SHA", "<sec:cipherSuitesFilter> <sec:include> RegularExpression </sec:include> <sec:include> RegularExpression </sec:include> <sec:exclude> RegularExpression </sec:exclude> <sec:exclude> RegularExpression </sec:exclude> </sec:cipherSuitesFilter>", "<sec:cipherSuitesFilter> <sec:include>.*_EXPORT_.*</sec:include> <sec:include>.*_EXPORT1024.*</sec:include> <sec:include>.*_DES_.*</sec:include> <sec:include>.*_WITH_NULL_.*</sec:include> </sec:cipherSuitesFilter>", "<beans ... > <http:conduit name=\"{ WSDLPortNamespace } PortName .http-conduit\"> <http:tlsClientParameters> <sec:cipherSuitesFilter> <sec:include>.*_WITH_3DES_.*</sec:include> <sec:include>.*_WITH_DES_.*</sec:include> <sec:exclude>.*_WITH_NULL_.*</sec:exclude> <sec:exclude>.*_DH_anon_.*</sec:exclude> </sec:cipherSuitesFilter> </http:tlsClientParameters> </http:conduit> <bean id=\"cxf\" class=\"org.apache.cxf.bus.CXFBusImpl\"/> </beans>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <beans ... > <httpj:engine-factory bus=\"cxf\"> <httpj:engine port=\"9001\"> <httpj:tlsServerParameters> <sec:excludeProtocols> <sec:excludeProtocol>SSLv2Hello</sec:excludeProtocol> <sec:excludeProtocol>SSLv3</sec:excludeProtocol> <sec:excludeProtocol>TLSv1</sec:excludeProtocol> <sec:excludeProtocol>TLSv1.1</sec:excludeProtocol> </sec:excludeProtocols> </httpj:tlsServerParameters> </httpj:engine> </httpj:engine-factory> </beans>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_security_guide/CiphersJava
Chapter 7. Working with clusters
Chapter 7. Working with clusters 7.1. Viewing system event information in an OpenShift Container Platform cluster Events in OpenShift Container Platform are modeled based on events that happen to API objects in an OpenShift Container Platform cluster. 7.1.1. Understanding events Events allow OpenShift Container Platform to record information about real-world events in a resource-agnostic manner. They also allow developers and administrators to consume information about system components in a unified way. 7.1.2. Viewing events using the CLI You can get a list of events in a given project using the CLI. Procedure To view events in a project use the following command: USD oc get events [-n <project>] 1 1 The name of the project. For example: USD oc get events -n openshift-config Example output LAST SEEN TYPE REASON OBJECT MESSAGE 97m Normal Scheduled pod/dapi-env-test-pod Successfully assigned openshift-config/dapi-env-test-pod to ip-10-0-171-202.ec2.internal 97m Normal Pulling pod/dapi-env-test-pod pulling image "gcr.io/google_containers/busybox" 97m Normal Pulled pod/dapi-env-test-pod Successfully pulled image "gcr.io/google_containers/busybox" 97m Normal Created pod/dapi-env-test-pod Created container 9m5s Warning FailedCreatePodSandBox pod/dapi-volume-test-pod Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dapi-volume-test-pod_openshift-config_6bc60c1f-452e-11e9-9140-0eec59c23068_0(748c7a40db3d08c07fb4f9eba774bd5effe5f0d5090a242432a73eee66ba9e22): Multus: Err adding pod to network "openshift-sdn": cannot set "openshift-sdn" ifname to "eth0": no netns: failed to Statfs "/proc/33366/ns/net": no such file or directory 8m31s Normal Scheduled pod/dapi-volume-test-pod Successfully assigned openshift-config/dapi-volume-test-pod to ip-10-0-171-202.ec2.internal To view events in your project from the OpenShift Container Platform console. Launch the OpenShift Container Platform console. Click Home Events and select your project. Move to resource that you want to see events. For example: Home Projects <project-name> <resource-name>. Many objects, such as pods and deployments, have their own Events tab as well, which shows events related to that object. 7.1.3. List of events This section describes the events of OpenShift Container Platform. Table 7.1. Configuration events Name Description FailedValidation Failed pod configuration validation. Table 7.2. Container events Name Description BackOff Back-off restarting failed the container. Created Container created. Failed Pull/Create/Start failed. Killing Killing the container. Started Container started. Preempting Preempting other pods. ExceededGracePeriod Container runtime did not stop the pod within specified grace period. Table 7.3. Health events Name Description Unhealthy Container is unhealthy. Table 7.4. Image events Name Description BackOff Back off Ctr Start, image pull. ErrImageNeverPull The image's NeverPull Policy is violated. Failed Failed to pull the image. InspectFailed Failed to inspect the image. Pulled Successfully pulled the image or the container image is already present on the machine. Pulling Pulling the image. Table 7.5. Image Manager events Name Description FreeDiskSpaceFailed Free disk space failed. InvalidDiskCapacity Invalid disk capacity. Table 7.6. Node events Name Description FailedMount Volume mount failed. HostNetworkNotSupported Host network not supported. HostPortConflict Host/port conflict. KubeletSetupFailed Kubelet setup failed. NilShaper Undefined shaper. 
NodeNotReady Node is not ready. NodeNotSchedulable Node is not schedulable. NodeReady Node is ready. NodeSchedulable Node is schedulable. NodeSelectorMismatching Node selector mismatch. OutOfDisk Out of disk. Rebooted Node rebooted. Starting Starting kubelet. FailedAttachVolume Failed to attach volume. FailedDetachVolume Failed to detach volume. VolumeResizeFailed Failed to expand/reduce volume. VolumeResizeSuccessful Successfully expanded/reduced volume. FileSystemResizeFailed Failed to expand/reduce file system. FileSystemResizeSuccessful Successfully expanded/reduced file system. FailedUnMount Failed to unmount volume. FailedMapVolume Failed to map a volume. FailedUnmapDevice Failed unmaped device. AlreadyMountedVolume Volume is already mounted. SuccessfulDetachVolume Volume is successfully detached. SuccessfulMountVolume Volume is successfully mounted. SuccessfulUnMountVolume Volume is successfully unmounted. ContainerGCFailed Container garbage collection failed. ImageGCFailed Image garbage collection failed. FailedNodeAllocatableEnforcement Failed to enforce System Reserved Cgroup limit. NodeAllocatableEnforced Enforced System Reserved Cgroup limit. UnsupportedMountOption Unsupported mount option. SandboxChanged Pod sandbox changed. FailedCreatePodSandBox Failed to create pod sandbox. FailedPodSandBoxStatus Failed pod sandbox status. Table 7.7. Pod worker events Name Description FailedSync Pod sync failed. Table 7.8. System Events Name Description SystemOOM There is an OOM (out of memory) situation on the cluster. Table 7.9. Pod events Name Description FailedKillPod Failed to stop a pod. FailedCreatePodContainer Failed to create a pod container. Failed Failed to make pod data directories. NetworkNotReady Network is not ready. FailedCreate Error creating: <error-msg> . SuccessfulCreate Created pod: <pod-name> . FailedDelete Error deleting: <error-msg> . SuccessfulDelete Deleted pod: <pod-id> . Table 7.10. Horizontal Pod AutoScaler events Name Description SelectorRequired Selector is required. InvalidSelector Could not convert selector into a corresponding internal selector object. FailedGetObjectMetric HPA was unable to compute the replica count. InvalidMetricSourceType Unknown metric source type. ValidMetricFound HPA was able to successfully calculate a replica count. FailedConvertHPA Failed to convert the given HPA. FailedGetScale HPA controller was unable to get the target's current scale. SucceededGetScale HPA controller was able to get the target's current scale. FailedComputeMetricsReplicas Failed to compute desired number of replicas based on listed metrics. FailedRescale New size: <size> ; reason: <msg> ; error: <error-msg> . SuccessfulRescale New size: <size> ; reason: <msg> . FailedUpdateStatus Failed to update status. Table 7.11. Network events (openshift-sdn) Name Description Starting Starting OpenShift-SDN. NetworkFailed The pod's network interface has been lost and the pod will be stopped. Table 7.12. Network events (kube-proxy) Name Description NeedPods The service-port <serviceName>:<port> needs pods. Table 7.13. Volume events Name Description FailedBinding There are no persistent volumes available and no storage class is set. VolumeMismatch Volume size or class is different from what is requested in claim. VolumeFailedRecycle Error creating recycler pod. VolumeRecycled Occurs when volume is recycled. RecyclerPod Occurs when pod is recycled. VolumeDelete Occurs when volume is deleted. VolumeFailedDelete Error when deleting the volume. 
ExternalProvisioning Occurs when volume for the claim is provisioned either manually or via external software. ProvisioningFailed Failed to provision volume. ProvisioningCleanupFailed Error cleaning provisioned volume. ProvisioningSucceeded Occurs when the volume is provisioned successfully. WaitForFirstConsumer Delay binding until pod scheduling. Table 7.14. Lifecycle hooks Name Description FailedPostStartHook Handler failed for pod start. FailedPreStopHook Handler failed for pre-stop. UnfinishedPreStopHook Pre-stop hook unfinished. Table 7.15. Deployments Name Description DeploymentCancellationFailed Failed to cancel deployment. DeploymentCancelled Canceled deployment. DeploymentCreated Created new replication controller. IngressIPRangeFull No available Ingress IP to allocate to service. Table 7.16. Scheduler events Name Description FailedScheduling Failed to schedule pod: <pod-namespace>/<pod-name> . This event is raised for multiple reasons, for example: AssumePodVolumes failed, Binding rejected etc. Preempted By <preemptor-namespace>/<preemptor-name> on node <node-name> . Scheduled Successfully assigned <pod-name> to <node-name> . Table 7.17. Daemon set events Name Description SelectingAll This daemon set is selecting all pods. A non-empty selector is required. FailedPlacement Failed to place pod on <node-name> . FailedDaemonPod Found failed daemon pod <pod-name> on node <node-name> , will try to kill it. Table 7.18. LoadBalancer service events Name Description CreatingLoadBalancerFailed Error creating load balancer. DeletingLoadBalancer Deleting load balancer. EnsuringLoadBalancer Ensuring load balancer. EnsuredLoadBalancer Ensured load balancer. UnAvailableLoadBalancer There are no available nodes for LoadBalancer service. LoadBalancerSourceRanges Lists the new LoadBalancerSourceRanges . For example, <old-source-range> <new-source-range> . LoadbalancerIP Lists the new IP address. For example, <old-ip> <new-ip> . ExternalIP Lists external IP address. For example, Added: <external-ip> . UID Lists the new UID. For example, <old-service-uid> <new-service-uid> . ExternalTrafficPolicy Lists the new ExternalTrafficPolicy . For example, <old-policy> <new-policy> . HealthCheckNodePort Lists the new HealthCheckNodePort . For example, <old-node-port> new-node-port> . UpdatedLoadBalancer Updated load balancer with new hosts. LoadBalancerUpdateFailed Error updating load balancer with new hosts. DeletingLoadBalancer Deleting load balancer. DeletingLoadBalancerFailed Error deleting load balancer. DeletedLoadBalancer Deleted load balancer. 7.2. Estimating the number of pods your OpenShift Container Platform nodes can hold As a cluster administrator, you can use the cluster capacity tool to view the number of pods that can be scheduled to increase the current resources before they become exhausted, and to ensure any future pods can be scheduled. This capacity comes from an individual node host in a cluster, and includes CPU, memory, disk space, and others. 7.2.1. Understanding the OpenShift Container Platform cluster capacity tool The cluster capacity tool simulates a sequence of scheduling decisions to determine how many instances of an input pod can be scheduled on the cluster before it is exhausted of resources to provide a more accurate estimation. Note The remaining allocatable capacity is a rough estimation, because it does not count all of the resources being distributed among nodes. 
It analyzes only the remaining resources and estimates the available capacity that is still consumable in terms of a number of instances of a pod with given requirements that can be scheduled in a cluster. Also, pods might only have scheduling support on particular sets of nodes based on its selection and affinity criteria. As a result, the estimation of which remaining pods a cluster can schedule can be difficult. You can run the cluster capacity analysis tool as a stand-alone utility from the command line, or as a job in a pod inside an OpenShift Container Platform cluster. Running it as job inside of a pod enables you to run it multiple times without intervention. 7.2.2. Running the cluster capacity tool on the command line You can run the OpenShift Container Platform cluster capacity tool from the command line to estimate the number of pods that can be scheduled onto your cluster. Prerequisites Run the OpenShift Cluster Capacity Tool , which is available as a container image from the Red Hat Ecosystem Catalog. Create a sample Pod spec file, which the tool uses for estimating resource usage. The podspec specifies its resource requirements as limits or requests . The cluster capacity tool takes the pod's resource requirements into account for its estimation analysis. An example of the Pod spec input is: apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi Procedure To use the cluster capacity tool on the command line: From the terminal, log in to the Red Hat Registry: USD podman login registry.redhat.io Pull the cluster capacity tool image: USD podman pull registry.redhat.io/openshift4/ose-cluster-capacity Run the cluster capacity tool: USD podman run -v USDHOME/.kube:/kube:Z -v USD(pwd):/cc:Z ose-cluster-capacity \ /bin/cluster-capacity --kubeconfig /kube/config --podspec /cc/pod-spec.yaml \ --verbose 1 1 You can also add the --verbose option to output a detailed description of how many pods can be scheduled on each node in the cluster. Example output small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 88 instance(s) of the pod small-pod. Termination reason: Unschedulable: 0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. Pod distribution among nodes: small-pod - 192.168.124.214: 45 instance(s) - 192.168.124.120: 43 instance(s) In the above example, the number of estimated pods that can be scheduled onto the cluster is 88. 7.2.3. Running the cluster capacity tool as a job inside a pod Running the cluster capacity tool as a job inside of a pod has the advantage of being able to be run multiple times without needing user intervention. Running the cluster capacity tool as a job involves using a ConfigMap object. Prerequisites Download and install the cluster capacity tool . 
Procedure To run the cluster capacity tool: Create the cluster role: USD cat << EOF| oc create -f - Example output kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-capacity-role rules: - apiGroups: [""] resources: ["pods", "nodes", "persistentvolumeclaims", "persistentvolumes", "services", "replicationcontrollers"] verbs: ["get", "watch", "list"] - apiGroups: ["apps"] resources: ["replicasets", "statefulsets"] verbs: ["get", "watch", "list"] - apiGroups: ["policy"] resources: ["poddisruptionbudgets"] verbs: ["get", "watch", "list"] - apiGroups: ["storage.k8s.io"] resources: ["storageclasses"] verbs: ["get", "watch", "list"] EOF Create the service account: USD oc create sa cluster-capacity-sa Add the role to the service account: USD oc adm policy add-cluster-role-to-user cluster-capacity-role \ system:serviceaccount:default:cluster-capacity-sa Define and create the Pod spec: apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi The cluster capacity analysis is mounted in a volume using a ConfigMap object named cluster-capacity-configmap to mount input pod spec file pod.yaml into a volume test-volume at the path /test-pod . If you haven't created a ConfigMap object, create one before creating the job: Create the job using the below example of a job specification file: apiVersion: batch/v1 kind: Job metadata: name: cluster-capacity-job spec: parallelism: 1 completions: 1 template: metadata: name: cluster-capacity-pod spec: containers: - name: cluster-capacity image: openshift/origin-cluster-capacity imagePullPolicy: "Always" volumeMounts: - mountPath: /test-pod name: test-volume env: - name: CC_INCLUSTER 1 value: "true" command: - "/bin/sh" - "-ec" - | /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose restartPolicy: "Never" serviceAccountName: cluster-capacity-sa volumes: - name: test-volume configMap: name: cluster-capacity-configmap 1 A required environment variable letting the cluster capacity tool know that it is running inside a cluster as a pod. The pod.yaml key of the ConfigMap object is the same as the Pod spec file name, though it is not required. By doing this, the input pod spec file can be accessed inside the pod as /test-pod/pod.yaml . Run the cluster capacity image as a job in a pod: USD oc create -f cluster-capacity-job.yaml Check the job logs to find the number of pods that can be scheduled in the cluster: USD oc logs jobs/cluster-capacity-job Example output small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 52 instance(s) of the pod small-pod. Termination reason: Unschedulable: No nodes are available that match all of the following predicates:: Insufficient cpu (2). Pod distribution among nodes: small-pod - 192.168.124.214: 26 instance(s) - 192.168.124.120: 26 instance(s) 7.3. Restrict resource consumption with limit ranges By default, containers run with unbounded compute resources on an OpenShift Container Platform cluster. With limit ranges, you can restrict resource consumption for specific objects in a project: pods and containers: You can set minimum and maximum requirements for CPU and memory for pods and their containers. Image streams: You can set limits on the number of images and tags in an ImageStream object. 
Images: You can limit the size of images that can be pushed to an internal registry. Persistent volume claims (PVC): You can restrict the size of the PVCs that can be requested. If a pod does not meet the constraints imposed by the limit range, the pod cannot be created in the namespace. 7.3.1. About limit ranges A limit range, defined by a LimitRange object, restricts resource consumption in a project. In the project you can set specific resource limits for a pod, container, image, image stream, or persistent volume claim (PVC). All requests to create and modify resources are evaluated against each LimitRange object in the project. If the resource violates any of the enumerated constraints, the resource is rejected. The following shows a limit range object for all components: pod, container, image, image stream, or PVC. You can configure limits for any or all of these components in the same object. You create a different limit range object for each project where you want to control resources. Sample limit range object for a container apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" spec: limits: - type: "Container" max: cpu: "2" memory: "1Gi" min: cpu: "100m" memory: "4Mi" default: cpu: "300m" memory: "200Mi" defaultRequest: cpu: "200m" memory: "100Mi" maxLimitRequestRatio: cpu: "10" 7.3.1.1. About component limits The following examples show limit range parameters for each component. The examples are broken out for clarity. You can create a single LimitRange object for any or all components as necessary. 7.3.1.1.1. Container limits A limit range allows you to specify the minimum and maximum CPU and memory that each container in a pod can request for a specific project. If a container is created in the project, the container CPU and memory requests in the Pod spec must comply with the values set in the LimitRange object. If not, the pod does not get created. The container CPU or memory request and limit must be greater than or equal to the min resource constraint for containers that are specified in the LimitRange object. The container CPU or memory request and limit must be less than or equal to the max resource constraint for containers that are specified in the LimitRange object. If the LimitRange object defines a max CPU, you do not need to define a CPU request value in the Pod spec. But you must specify a CPU limit value that satisfies the maximum CPU constraint specified in the limit range. The ratio of the container limits to requests must be less than or equal to the maxLimitRequestRatio value for containers that is specified in the LimitRange object. If the LimitRange object defines a maxLimitRequestRatio constraint, any new containers must have both a request and a limit value. OpenShift Container Platform calculates the limit-to-request ratio by dividing the limit by the request . This value should be a non-negative integer greater than 1. For example, if a container has cpu: 500 in the limit value, and cpu: 100 in the request value, the limit-to-request ratio for cpu is 5 . This ratio must be less than or equal to the maxLimitRequestRatio . If the Pod spec does not specify a container resource memory or limit, the default or defaultRequest CPU and memory values for containers specified in the limit range object are assigned to the container. 
Container LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Container" max: cpu: "2" 2 memory: "1Gi" 3 min: cpu: "100m" 4 memory: "4Mi" 5 default: cpu: "300m" 6 memory: "200Mi" 7 defaultRequest: cpu: "200m" 8 memory: "100Mi" 9 maxLimitRequestRatio: cpu: "10" 10 1 The name of the LimitRange object. 2 The maximum amount of CPU that a single container in a pod can request. 3 The maximum amount of memory that a single container in a pod can request. 4 The minimum amount of CPU that a single container in a pod can request. 5 The minimum amount of memory that a single container in a pod can request. 6 The default amount of CPU that a container can use if not specified in the Pod spec. 7 The default amount of memory that a container can use if not specified in the Pod spec. 8 The default amount of CPU that a container can request if not specified in the Pod spec. 9 The default amount of memory that a container can request if not specified in the Pod spec. 10 The maximum limit-to-request ratio for a container. 7.3.1.1.2. Pod limits A limit range allows you to specify the minimum and maximum CPU and memory limits for all containers across a pod in a given project. To create a container in the project, the container CPU and memory requests in the Pod spec must comply with the values set in the LimitRange object. If not, the pod does not get created. If the Pod spec does not specify a container resource memory or limit, the default or defaultRequest CPU and memory values for containers specified in the limit range object are assigned to the container. Across all containers in a pod, the following must hold true: The container CPU or memory request and limit must be greater than or equal to the min resource constraints for pods that are specified in the LimitRange object. The container CPU or memory request and limit must be less than or equal to the max resource constraints for pods that are specified in the LimitRange object. The ratio of the container limits to requests must be less than or equal to the maxLimitRequestRatio constraint specified in the LimitRange object. Pod LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Pod" max: cpu: "2" 2 memory: "1Gi" 3 min: cpu: "200m" 4 memory: "6Mi" 5 maxLimitRequestRatio: cpu: "10" 6 1 The name of the limit range object. 2 The maximum amount of CPU that a pod can request across all containers. 3 The maximum amount of memory that a pod can request across all containers. 4 The minimum amount of CPU that a pod can request across all containers. 5 The minimum amount of memory that a pod can request across all containers. 6 The maximum limit-to-request ratio for a container. 7.3.1.1.3. Image limits A LimitRange object allows you to specify the maximum size of an image that can be pushed to an internal registry. When pushing images to an internal registry, the following must hold true: The size of the image must be less than or equal to the max size for images that is specified in the LimitRange object. Image LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: openshift.io/Image max: storage: 1Gi 2 1 The name of the LimitRange object. 2 The maximum size of an image that can be pushed to an internal registry. Note To prevent blobs that exceed the limit from being uploaded to the registry, the registry must be configured to enforce quotas. 
Warning The image size is not always available in the manifest of an uploaded image. This is especially the case for images built with Docker 1.10 or higher and pushed to a v2 registry. If such an image is pulled with an older Docker daemon, the image manifest is converted by the registry to schema v1, which lacks all of the size information. Because the size information is missing, no storage limit set on images will prevent such an image from being uploaded. The issue is being addressed. 7.3.1.1.4. Image stream limits A LimitRange object allows you to specify limits for image streams. For each image stream, the following must hold true: The number of image tags in an ImageStream specification must be less than or equal to the openshift.io/image-tags constraint in the LimitRange object. The number of unique references to images in an ImageStream specification must be less than or equal to the openshift.io/images constraint in the limit range object. Imagestream LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3 1 The name of the LimitRange object. 2 The maximum number of unique image tags in the imagestream.spec.tags parameter in imagestream spec. 3 The maximum number of unique image references in the imagestream.status.tags parameter in the imagestream spec. The openshift.io/image-tags resource represents unique image references. Possible references are an ImageStreamTag, an ImageStreamImage, and a DockerImage. Tags can be created using the oc tag and oc import-image commands. No distinction is made between internal and external references. However, each unique reference tagged in an ImageStream specification is counted just once. It does not restrict pushes to an internal container image registry in any way, but is useful for tag restriction. The openshift.io/images resource represents unique image names recorded in image stream status. It allows for restriction of the number of images that can be pushed to the internal registry. Internal and external references are not distinguished. 7.3.1.1.5. Persistent volume claim limits A LimitRange object allows you to restrict the storage requested in a persistent volume claim (PVC). Across all persistent volume claims in a project, the following must hold true: The resource request in a persistent volume claim (PVC) must be greater than or equal to the min constraint for PVCs that is specified in the LimitRange object. The resource request in a persistent volume claim (PVC) must be less than or equal to the max constraint for PVCs that is specified in the LimitRange object. PVC LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "PersistentVolumeClaim" min: storage: "2Gi" 2 max: storage: "50Gi" 3 1 The name of the LimitRange object. 2 The minimum amount of storage that can be requested in a persistent volume claim. 3 The maximum amount of storage that can be requested in a persistent volume claim. 7.3.2. 
Creating a Limit Range To apply a limit range to a project: Create a LimitRange object with your required specifications: apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Pod" 2 max: cpu: "2" memory: "1Gi" min: cpu: "200m" memory: "6Mi" - type: "Container" 3 max: cpu: "2" memory: "1Gi" min: cpu: "100m" memory: "4Mi" default: 4 cpu: "300m" memory: "200Mi" defaultRequest: 5 cpu: "200m" memory: "100Mi" maxLimitRequestRatio: 6 cpu: "10" - type: openshift.io/Image 7 max: storage: 1Gi - type: openshift.io/ImageStream 8 max: openshift.io/image-tags: 20 openshift.io/images: 30 - type: "PersistentVolumeClaim" 9 min: storage: "2Gi" max: storage: "50Gi" 1 Specify a name for the LimitRange object. 2 To set limits for a pod, specify the minimum and maximum CPU and memory requests as needed. 3 To set limits for a container, specify the minimum and maximum CPU and memory requests as needed. 4 Optional. For a container, specify the default amount of CPU or memory that a container can use, if not specified in the Pod spec. 5 Optional. For a container, specify the default amount of CPU or memory that a container can request, if not specified in the Pod spec. 6 Optional. For a container, specify the maximum limit-to-request ratio that can be specified in the Pod spec. 7 To set limits for an Image object, set the maximum size of an image that can be pushed to an internal registry. 8 To set limits for an image stream, set the maximum number of image tags and references that can be in the ImageStream object file, as needed. 9 To set limits for a persistent volume claim, set the minimum and maximum amount of storage that can be requested. Create the object: 1 Specify the name of the YAML file you created and the project where you want the limits to apply. 7.3.3. Viewing a limit You can view any limits defined in a project by navigating in the web console to the project's Quota page. You can also use the CLI to view limit range details: Get the list of LimitRange object defined in the project. For example, for a project called demoproject : Describe the LimitRange object you are interested in, for example the resource-limits limit range: 7.3.4. Deleting a Limit Range To remove any active LimitRange object to no longer enforce the limits in a project: Run the following command: 7.4. Configuring cluster memory to meet container memory and risk requirements As a cluster administrator, you can help your clusters operate efficiently through managing application memory by: Determining the memory and risk requirements of a containerized application component and configuring the container memory parameters to suit those requirements. Configuring containerized application runtimes (for example, OpenJDK) to adhere optimally to the configured container memory parameters. Diagnosing and resolving memory-related error conditions associated with running in a container. 7.4.1. Understanding managing application memory It is recommended to fully read the overview of how OpenShift Container Platform manages Compute Resources before proceeding. For each kind of resource (memory, CPU, storage), OpenShift Container Platform allows optional request and limit values to be placed on each container in a pod. Note the following about memory requests and memory limits: Memory request The memory request value, if specified, influences the OpenShift Container Platform scheduler. 
The scheduler considers the memory request when scheduling a container to a node, then fences off the requested memory on the chosen node for the use of the container. If a node's memory is exhausted, OpenShift Container Platform prioritizes evicting its containers whose memory usage most exceeds their memory request. In serious cases of memory exhaustion, the node OOM killer may select and kill a process in a container based on a similar metric. The cluster administrator can assign quota or assign default values for the memory request value. The cluster administrator can override the memory request values that a developer specifies, to manage cluster overcommit. Memory limit The memory limit value, if specified, provides a hard limit on the memory that can be allocated across all the processes in a container. If the memory allocated by all of the processes in a container exceeds the memory limit, the node Out of Memory (OOM) killer will immediately select and kill a process in the container. If both memory request and limit are specified, the memory limit value must be greater than or equal to the memory request. The cluster administrator can assign quota or assign default values for the memory limit value. The minimum memory limit is 12 MB. If a container fails to start due to a Cannot allocate memory pod event, the memory limit is too low. Either increase or remove the memory limit. Removing the limit allows pods to consume unbounded node resources. 7.4.1.1. Managing application memory strategy The steps for sizing application memory on OpenShift Container Platform are as follows: Determine expected container memory usage Determine expected mean and peak container memory usage, empirically if necessary (for example, by separate load testing). Remember to consider all the processes that may potentially run in parallel in the container: for example, does the main application spawn any ancillary scripts? Determine risk appetite Determine risk appetite for eviction. If the risk appetite is low, the container should request memory according to the expected peak usage plus a percentage safety margin. If the risk appetite is higher, it may be more appropriate to request memory according to the expected mean usage. Set container memory request Set container memory request based on the above. The more accurately the request represents the application memory usage, the better. If the request is too high, cluster and quota usage will be inefficient. If the request is too low, the chances of application eviction increase. Set container memory limit, if required Set container memory limit, if required. Setting a limit has the effect of immediately killing a container process if the combined memory usage of all processes in the container exceeds the limit, and is therefore a mixed blessing. On the one hand, it may make unanticipated excess memory usage obvious early ("fail fast"); on the other hand it also terminates processes abruptly. Note that some OpenShift Container Platform clusters may require a limit value to be set; some may override the request based on the limit; and some application images rely on a limit value being set as this is easier to detect than a request value. If the memory limit is set, it should not be set to less than the expected peak container memory usage plus a percentage safety margin. Ensure application is tuned Ensure application is tuned with respect to configured request and limit values, if appropriate. 
This step is particularly relevant to applications which pool memory, such as the JVM. The rest of this page discusses this. Additional resources Understanding compute resources and containers 7.4.2. Understanding OpenJDK settings for OpenShift Container Platform The default OpenJDK settings do not work well with containerized environments. As a result, some additional Java memory settings must always be provided whenever running the OpenJDK in a container. The JVM memory layout is complex, version dependent, and describing it in detail is beyond the scope of this documentation. However, as a starting point for running OpenJDK in a container, at least the following three memory-related tasks are key: Overriding the JVM maximum heap size. Encouraging the JVM to release unused memory to the operating system, if appropriate. Ensuring all JVM processes within a container are appropriately configured. Optimally tuning JVM workloads for running in a container is beyond the scope of this documentation, and may involve setting multiple additional JVM options. 7.4.2.1. Understanding how to override the JVM maximum heap size For many Java workloads, the JVM heap is the largest single consumer of memory. Currently, the OpenJDK defaults to allowing up to 1/4 (1/ -XX:MaxRAMFraction ) of the compute node's memory to be used for the heap, regardless of whether the OpenJDK is running in a container or not. It is therefore essential to override this behavior, especially if a container memory limit is also set. There are at least two ways the above can be achieved: If the container memory limit is set and the experimental options are supported by the JVM, set -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap . Note The UseCGroupMemoryLimitForHeap option has been removed in JDK 11. Use -XX:+UseContainerSupport instead. This sets -XX:MaxRAM to the container memory limit, and the maximum heap size ( -XX:MaxHeapSize / -Xmx ) to 1/ -XX:MaxRAMFraction (1/4 by default). Directly override one of -XX:MaxRAM , -XX:MaxHeapSize or -Xmx . This option involves hard-coding a value, but has the advantage of allowing a safety margin to be calculated. 7.4.2.2. Understanding how to encourage the JVM to release unused memory to the operating system By default, the OpenJDK does not aggressively return unused memory to the operating system. This may be appropriate for many containerized Java workloads, but notable exceptions include workloads where additional active processes co-exist with a JVM within a container, whether those additional processes are native, additional JVMs, or a combination of the two. The OpenShift Container Platform Jenkins maven slave image uses the following JVM arguments to encourage the JVM to release unused memory to the operating system: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90. These arguments are intended to return heap memory to the operating system whenever allocated memory exceeds 110% of in-use memory ( -XX:MaxHeapFreeRatio ), spending up to 20% of CPU time in the garbage collector ( -XX:GCTimeRatio ). At no time will the application heap allocation be less than the initial heap allocation (overridden by -XX:InitialHeapSize / -Xms ). Detailed additional information is available Tuning Java's footprint in OpenShift (Part 1) , Tuning Java's footprint in OpenShift (Part 2) , and at OpenJDK and Containers . 7.4.2.3. 
Understanding how to ensure all JVM processes within a container are appropriately configured In the case that multiple JVMs run in the same container, it is essential to ensure that they are all configured appropriately. For many workloads it will be necessary to grant each JVM a percentage memory budget, leaving a perhaps substantial additional safety margin. Many Java tools use different environment variables ( JAVA_OPTS , GRADLE_OPTS , MAVEN_OPTS , and so on) to configure their JVMs and it can be challenging to ensure that the right settings are being passed to the right JVM. The JAVA_TOOL_OPTIONS environment variable is always respected by the OpenJDK, and values specified in JAVA_TOOL_OPTIONS will be overridden by other options specified on the JVM command line. By default, to ensure that these options are used by default for all JVM workloads run in the slave image, the OpenShift Container Platform Jenkins maven slave image sets: JAVA_TOOL_OPTIONS="-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true" Note The UseCGroupMemoryLimitForHeap option has been removed in JDK 11. Use -XX:+UseContainerSupport instead. This does not guarantee that additional options are not required, but is intended to be a helpful starting point. 7.4.3. Finding the memory request and limit from within a pod An application wishing to dynamically discover its memory request and limit from within a pod should use the Downward API. Procedure Configure the pod to add the MEMORY_REQUEST and MEMORY_LIMIT stanzas: apiVersion: v1 kind: Pod metadata: name: test spec: containers: - name: test image: fedora:latest command: - sleep - "3600" env: - name: MEMORY_REQUEST 1 valueFrom: resourceFieldRef: containerName: test resource: requests.memory - name: MEMORY_LIMIT 2 valueFrom: resourceFieldRef: containerName: test resource: limits.memory resources: requests: memory: 384Mi limits: memory: 512Mi 1 Add this stanza to discover the application memory request value. 2 Add this stanza to discover the application memory limit value. Create the pod: USD oc create -f <file-name>.yaml Access the pod using a remote shell: USD oc rsh test Check that the requested values were applied: USD env | grep MEMORY | sort Example output MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184 Note The memory limit value can also be read from inside the container by the /sys/fs/cgroup/memory/memory.limit_in_bytes file. 7.4.4. Understanding OOM kill policy OpenShift Container Platform can kill a process in a container if the total memory usage of all the processes in the container exceeds the memory limit, or in serious cases of node memory exhaustion. When a process is Out of Memory (OOM) killed, this might result in the container exiting immediately. If the container PID 1 process receives the SIGKILL , the container will exit immediately. Otherwise, the container behavior is dependent on the behavior of the other processes. For example, a container process exited with code 137, indicating it received a SIGKILL signal. If the container does not exit immediately, an OOM kill is detectable as follows: Access the pod using a remote shell: # oc rsh test Run the following command to see the current OOM kill count in /sys/fs/cgroup/memory/memory.oom_control : USD grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control oom_kill 0 Run the following command to provoke an OOM kill: USD sed -e '' </dev/zero Example output Killed Run the following command to view the exit status of the sed command: USD echo USD? 
Example output 137 The 137 code indicates the container process exited with code 137, indicating it received a SIGKILL signal. Run the following command to see that the OOM kill counter in /sys/fs/cgroup/memory/memory.oom_control incremented: USD grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control oom_kill 1 If one or more processes in a pod are OOM killed, when the pod subsequently exits, whether immediately or not, it will have phase Failed and reason OOMKilled . An OOM-killed pod might be restarted depending on the value of restartPolicy . If not restarted, controllers such as the replication controller will notice the pod's failed status and create a new pod to replace the old one. Use the following command to get the pod status: USD oc get pod test Example output NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m If the pod has not restarted, run the following command to view the pod: USD oc get pod test -o yaml Example output ... status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed If restarted, run the following command to view the pod: USD oc get pod test -o yaml Example output ... status: containerStatuses: - name: test ready: true restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running 7.4.5. Understanding pod eviction OpenShift Container Platform may evict a pod from its node when the node's memory is exhausted. Depending on the extent of memory exhaustion, the eviction may or may not be graceful. Graceful eviction implies the main process (PID 1) of each container receiving a SIGTERM signal, then some time later a SIGKILL signal if the process has not exited already. Non-graceful eviction implies the main process of each container immediately receiving a SIGKILL signal. An evicted pod has phase Failed and reason Evicted . It will not be restarted, regardless of the value of restartPolicy . However, controllers such as the replication controller will notice the pod's failed status and create a new pod to replace the old one. USD oc get pod test Example output NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m USD oc get pod test -o yaml Example output ... status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted 7.5. Configuring your cluster to place pods on overcommitted nodes In an overcommitted state, the sum of the container compute resource requests and limits exceeds the resources available on the system. For example, you might want to use overcommitment in development environments where a trade-off of guaranteed performance for capacity is acceptable. Containers can specify compute resource requests and limits. Requests are used for scheduling your container and provide a minimum service guarantee. Limits constrain the amount of compute resource that can be consumed on your node. The scheduler attempts to optimize the compute resource use across all nodes in your cluster. It places pods onto specific nodes, taking the pods' compute resource requests and nodes' available capacity into consideration. OpenShift Container Platform administrators can control the level of overcommit and manage container density on nodes. You can configure cluster-level overcommit using the ClusterResourceOverride Operator to override the ratio between requests and limits set on developer containers.
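Before looking at the cluster-level controls, it may help to see what an overcommitted container spec looks like. The following is a minimal sketch rather than part of the procedures above: the pod name is illustrative, the image and command mirror the test pod used earlier in this section, and the 1Gi request with a 2Gi limit matches the 200% overcommit example discussed below.

apiVersion: v1
kind: Pod
metadata:
  name: overcommit-example    # illustrative name
spec:
  containers:
  - name: app
    image: fedora:latest
    command:
    - sleep
    - "3600"
    resources:
      requests:
        memory: 1Gi           # the scheduler places the pod based on this value
      limits:
        memory: 2Gi           # hard ceiling; twice the request, so 200% overcommitted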
In conjunction with node overcommit and project memory and CPU limits and defaults , you can adjust the resource limit and request to achieve the desired level of overcommit. Note In OpenShift Container Platform, you must enable cluster-level overcommit. Node overcommitment is enabled by default. See Disabling overcommitment for a node . 7.5.1. Resource requests and overcommitment For each compute resource, a container may specify a resource request and limit. Scheduling decisions are made based on the request to ensure that a node has enough capacity available to meet the requested value. If a container specifies limits, but omits requests, the requests are defaulted to the limits. A container is not able to exceed the specified limit on the node. The enforcement of limits is dependent upon the compute resource type. If a container makes no request or limit, the container is scheduled to a node with no resource guarantees. In practice, the container is able to consume as much of the specified resource as is available with the lowest local priority. In low resource situations, containers that specify no resource requests are given the lowest quality of service. Scheduling is based on resources requested, while quota and hard limits refer to resource limits, which can be set higher than requested resources. The difference between request and limit determines the level of overcommit; for instance, if a container is given a memory request of 1Gi and a memory limit of 2Gi, it is scheduled based on the 1Gi request being available on the node, but could use up to 2Gi; so it is 200% overcommitted. 7.5.2. Cluster-level overcommit using the Cluster Resource Override Operator The Cluster Resource Override Operator is an admission webhook that allows you to control the level of overcommit and manage container density across all the nodes in your cluster. The Operator controls how nodes in specific projects can exceed defined memory and CPU limits. You must install the Cluster Resource Override Operator using the OpenShift Container Platform console or CLI as shown in the following sections. During the installation, you create a ClusterResourceOverride custom resource (CR), where you set the level of overcommit, as shown in the following example: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 1 The name must be cluster . 2 Optional. If a container memory limit has been specified or defaulted, the memory request is overridden to this percentage of the limit, between 1-100. The default is 50. 3 Optional. If a container CPU limit has been specified or defaulted, the CPU request is overridden to this percentage of the limit, between 1-100. The default is 25. 4 Optional. If a container memory limit has been specified or defaulted, the CPU limit is overridden to a percentage of the memory limit, if specified. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request (if configured). The default is 200. Note The Cluster Resource Override Operator overrides have no effect if limits have not been set on containers. Create a LimitRange object with default limits per individual project or configure limits in Pod specs for the overrides to apply. 
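For reference, a per-project LimitRange of the kind the note describes can be as small as the following sketch. It is modeled on the resource-limits example used elsewhere in this document; the name and the default values are illustrative, and only container-level defaults are shown.

apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits       # illustrative name
spec:
  limits:
  - type: Container
    default:                  # limit applied to containers that do not set one
      cpu: 300m
      memory: 200Mi
    defaultRequest:           # request applied to containers that do not set one
      cpu: 200m
      memory: 100Mi

With such defaults in place, every container in the project receives a limit, which gives the Cluster Resource Override Operator something to override.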
When configured, overrides can be enabled per-project by applying the following label to the Namespace object for each project: apiVersion: v1 kind: Namespace metadata: .... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" .... The Operator watches for the ClusterResourceOverride CR and ensures that the ClusterResourceOverride admission webhook is installed into the same namespace as the operator. 7.5.2.1. Installing the Cluster Resource Override Operator using the web console You can use the OpenShift Container Platform web console to install the Cluster Resource Override Operator to help control overcommit in your cluster. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To install the Cluster Resource Override Operator using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, navigate to Home Projects Click Create Project . Specify clusterresourceoverride-operator as the name of the project. Click Create . Navigate to Operators OperatorHub . Choose ClusterResourceOverride Operator from the list of available Operators and click Install . On the Install Operator page, make sure A specific Namespace on the cluster is selected for Installation Mode . Make sure clusterresourceoverride-operator is selected for Installed Namespace . Select an Update Channel and Approval Strategy . Click Install . On the Installed Operators page, click ClusterResourceOverride . On the ClusterResourceOverride Operator details page, click Create Instance . On the Create ClusterResourceOverride page, edit the YAML template to set the overcommit values as needed: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 1 The name must be cluster . 2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Click Create . Check the current state of the admission webhook by checking the status of the cluster custom resource: On the ClusterResourceOverride Operator page, click cluster . On the ClusterResourceOverride Details page, click YAML . The mutatingWebhookConfigurationRef section appears when the webhook is called. 
apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: .... mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 .... 1 Reference to the ClusterResourceOverride admission webhook. 7.5.2.2. Installing the Cluster Resource Override Operator using the CLI You can use the OpenShift Container Platform CLI to install the Cluster Resource Override Operator to help control overcommit in your cluster. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To install the Cluster Resource Override Operator using the CLI: Create a namespace for the Cluster Resource Override Operator: Create a Namespace object YAML file (for example, cro-namespace.yaml ) for the Cluster Resource Override Operator: apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator Create the namespace: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-namespace.yaml Create an Operator group: Create an OperatorGroup object YAML file (for example, cro-og.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator Create the Operator Group: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-og.yaml Create a subscription: Create a Subscription object YAML file (for example, cro-sub.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: "4.7" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace Create the subscription: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-sub.yaml Create a ClusterResourceOverride custom resource (CR) object in the clusterresourceoverride-operator namespace: Change to the clusterresourceoverride-operator namespace. USD oc project clusterresourceoverride-operator Create a ClusterResourceOverride object YAML file (for example, cro-cr.yaml) for the Cluster Resource Override Operator: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 1 The name must be cluster . 
2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Create the ClusterResourceOverride object: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-cr.yaml Verify the current state of the admission webhook by checking the status of the cluster custom resource. USD oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml The mutatingWebhookConfigurationRef section appears when the webhook is called. Example output apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: .... mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 .... 1 Reference to the ClusterResourceOverride admission webhook. 7.5.2.3. Configuring cluster-level overcommit The Cluster Resource Override Operator requires a ClusterResourceOverride custom resource (CR) and a label for each project where you want the Operator to control overcommit. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To modify cluster-level overcommit: Edit the ClusterResourceOverride CR: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3 1 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 2 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 3 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Ensure the following label has been added to the Namespace object for each project where you want the Cluster Resource Override Operator to control overcommit: apiVersion: v1 kind: Namespace metadata: ... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" 1 ... 1 Add this label to each project. 7.5.3. 
Node-level overcommit You can use various ways to control overcommit on specific nodes, such as quality of service (QOS) guarantees, CPU limits, or reserved resources. You can also disable overcommit for specific nodes and specific projects. 7.5.3.1. Understanding compute resources and containers The node-enforced behavior for compute resources is specific to the resource type. 7.5.3.1.1. Understanding container CPU requests A container is guaranteed the amount of CPU it requests and is additionally able to consume excess CPU available on the node, up to any limit specified by the container. If multiple containers are attempting to use excess CPU, CPU time is distributed based on the amount of CPU requested by each container. For example, if one container requested 500m of CPU time and another container requested 250m of CPU time, then any extra CPU time available on the node is distributed among the containers in a 2:1 ratio. If a container specified a limit, it will be throttled not to use more CPU than the specified limit. CPU requests are enforced using the CFS shares support in the Linux kernel. By default, CPU limits are enforced using the CFS quota support in the Linux kernel over a 100ms measuring interval, though this can be disabled. 7.5.3.1.2. Understanding container memory requests A container is guaranteed the amount of memory it requests. A container can use more memory than requested, but once it exceeds its requested amount, it could be terminated in a low memory situation on the node. If a container uses less memory than requested, it will not be terminated unless system tasks or daemons need more memory than was accounted for in the node's resource reservation. If a container specifies a limit on memory, it is immediately terminated if it exceeds the limit amount. 7.5.3.2. Understanding overcommitment and quality of service classes A node is overcommitted when it has a pod scheduled that makes no request, or when the sum of limits across all pods on that node exceeds available machine capacity. In an overcommitted environment, it is possible that the pods on the node will attempt to use more compute resource than is available at any given point in time. When this occurs, the node must give priority to one pod over another. The facility used to make this decision is referred to as a Quality of Service (QoS) Class. For each compute resource, a container is divided into one of three QoS classes with decreasing order of priority: Table 7.19. Quality of Service Classes Priority Class Name Description 1 (highest) Guaranteed If limits and optionally requests are set (not equal to 0) for all resources and they are equal, then the container is classified as Guaranteed . 2 Burstable If requests and optionally limits are set (not equal to 0) for all resources, and they are not equal, then the container is classified as Burstable . 3 (lowest) BestEffort If requests and limits are not set for any of the resources, then the container is classified as BestEffort . Memory is an incompressible resource, so in low memory situations, containers that have the lowest priority are terminated first: Guaranteed containers are considered top priority, and are guaranteed to only be terminated if they exceed their limits, or if the system is under memory pressure and there are no lower priority containers that can be evicted. Burstable containers under system memory pressure are more likely to be terminated once they exceed their requests and no other BestEffort containers exist. 
BestEffort containers are treated with the lowest priority. Processes in these containers are first to be terminated if the system runs out of memory. 7.5.3.2.1. Understanding how to reserve memory across quality of service tiers You can use the qos-reserved parameter to specify a percentage of memory to be reserved by a pod in a particular QoS level. This feature attempts to reserve requested resources to exclude pods in lower QoS classes from using resources requested by pods in higher QoS classes. OpenShift Container Platform uses the qos-reserved parameter as follows: A value of qos-reserved=memory=100% will prevent the Burstable and BestEffort QOS classes from consuming memory that was requested by a higher QoS class. This increases the risk of inducing OOM on BestEffort and Burstable workloads in favor of increasing memory resource guarantees for Guaranteed and Burstable workloads. A value of qos-reserved=memory=50% will allow the Burstable and BestEffort QOS classes to consume half of the memory requested by a higher QoS class. A value of qos-reserved=memory=0% will allow the Burstable and BestEffort QoS classes to consume up to the full node allocatable amount if available, but increases the risk that a Guaranteed workload will not have access to requested memory. This condition effectively disables this feature. 7.5.3.3. Understanding swap memory and QOS You can disable swap by default on your nodes to preserve quality of service (QOS) guarantees. Otherwise, physical resources on a node can oversubscribe, affecting the resource guarantees the Kubernetes scheduler makes during pod placement. For example, if two guaranteed pods have reached their memory limit, each container could start using swap memory. Eventually, if there is not enough swap space, processes in the pods can be terminated due to the system being oversubscribed. Failing to disable swap results in nodes not recognizing that they are experiencing MemoryPressure , resulting in pods not receiving the memory they requested when they were scheduled. As a result, additional pods are placed on the node to further increase memory pressure, ultimately increasing your risk of experiencing a system out of memory (OOM) event. Important If swap is enabled, any out-of-resource handling eviction thresholds for available memory will not work as expected. Take advantage of out-of-resource handling to allow pods to be evicted from a node when it is under memory pressure, and rescheduled on an alternative node that has no such pressure. 7.5.3.4. Understanding nodes overcommitment In an overcommitted environment, it is important to properly configure your node to provide the best system behavior. When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory. To ensure this behavior, OpenShift Container Platform configures the kernel to always overcommit memory by setting the vm.overcommit_memory parameter to 1 , overriding the default operating system setting. OpenShift Container Platform also configures the kernel not to panic when it runs out of memory by setting the vm.panic_on_oom parameter to 0 . 
A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority. You can view the current setting by running the following commands on your nodes: USD sysctl -a |grep commit Example output vm.overcommit_memory = 1 USD sysctl -a |grep panic Example output vm.panic_on_oom = 0 Note The above flags should already be set on nodes, and no further action is required. You can also perform the following configurations for each node: Disable or enforce CPU limits using CPU CFS quotas Reserve resources for system processes Reserve memory across quality of service tiers 7.5.3.5. Disabling or enforcing CPU limits using CPU CFS quotas Nodes by default enforce specified CPU limits using the Completely Fair Scheduler (CFS) quota support in the Linux kernel. If you disable CPU limit enforcement, it is important to understand the impact on your node: If a container has a CPU request, the request continues to be enforced by CFS shares in the Linux kernel. If a container does not have a CPU request, but does have a CPU limit, the CPU request defaults to the specified CPU limit, and is enforced by CFS shares in the Linux kernel. If a container has both a CPU request and limit, the CPU request is enforced by CFS shares in the Linux kernel, and the CPU limit has no impact on the node. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure. Perform one of the following steps: View the machine config pool: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: small-pods 1 1 If a label has been added, it appears under labels . If the label is not present, add a key/value pair: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change. Sample configuration for disabling CPU limits apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: small-pods 2 kubeletConfig: cpuCfsQuota: 3 - "false" 1 Assign a name to the CR. 2 Specify the label to apply the configuration change. 3 Set the cpuCfsQuota parameter to false . 7.5.3.6. Reserving resources for system processes To provide more reliable scheduling and minimize node resource overcommitment, each node can reserve a portion of its resources for use by system daemons that are required to run on your node for your cluster to function. In particular, it is recommended that you reserve resources for incompressible resources such as memory. Procedure To explicitly reserve resources for non-pod processes, allocate node resources by specifying resources available for scheduling. For more details, see Allocating Resources for Nodes. 7.5.3.7. Disabling overcommitment for a node When enabled, overcommitment can be disabled on each node. Procedure To disable overcommitment in a node, run the following command on that node: USD sysctl -w vm.overcommit_memory=0 7.5.4. Project-level limits To help control overcommit, you can set per-project resource limit ranges, specifying memory and CPU limits and defaults for a project that overcommit cannot exceed. For information on project-level resource limits, see Additional resources. 
Alternatively, you can disable overcommitment for specific projects. 7.5.4.1. Disabling overcommitment for a project When enabled, overcommitment can be disabled per-project. For example, you can allow infrastructure components to be configured independently of overcommitment. Procedure To disable overcommitment in a project: Edit the project object file Add the following annotation: quota.openshift.io/cluster-resource-override-enabled: "false" Create the project object: USD oc create -f <file-name>.yaml 7.5.5. Additional resources For information setting per-project resource limits, see Setting deployment resources . For more information about explicitly reserving resources for non-pod processes, see Allocating resources for nodes . 7.6. Enabling OpenShift Container Platform features using FeatureGates As an administrator, you can use feature gates to enable features that are not part of the default set of features. 7.6.1. Understanding feature gates You can use the FeatureGate custom resource (CR) to enable specific feature sets in your cluster. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. You can activate the following feature set by using the FeatureGate CR: IPv6DualStackNoUpgrade . This feature gate enables the dual-stack networking mode in your cluster. Dual-stack networking supports the use of IPv4 and IPv6 simultaneously. Enabling this feature set is not supported , cannot be undone, and prevents updates. This feature set is not recommended on production clusters. 7.6.2. Enabling feature sets using the web console You can use the OpenShift Container Platform web console to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR). Procedure To enable feature sets: In the OpenShift Container Platform web console, switch to the Administration Custom Resource Definitions page. On the Custom Resource Definitions page, click FeatureGate . On the Custom Resource Definition Details page, click the Instances tab. Click the cluster feature gate, then click the YAML tab. Edit the cluster instance to add specific feature sets: Sample Feature Gate custom resource apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 .... spec: featureSet: IPv6DualStackNoUpgrade 2 1 The name of the FeatureGate CR must be cluster . 2 Add the IPv6DualStackNoUpgrade feature set to enable the dual-stack networking mode. After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. Note Enabling the IPv6DualStackNoUpgrade feature set cannot be undone and prevents updates. This feature set is not recommended on production clusters. Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. From the Administrator perspective in the web console, navigate to Compute Nodes . Select a node. In the Node details page, click Terminal . In the terminal window, change your root directory to the host: sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output ... featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false ... The features that are listed as true are enabled on your cluster. Note The features listed vary depending upon the OpenShift Container Platform version. 7.6.3. 
Enabling feature sets using the CLI You can use the OpenShift CLI ( oc ) to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR). Prerequisites You have installed the OpenShift CLI ( oc ). Procedure To enable feature sets: Edit the FeatureGate CR named cluster : USD oc edit featuregate cluster Sample FeatureGate custom resource apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: IPv6DualStackNoUpgrade 2 1 The name of the FeatureGate CR must be cluster . 2 Add the IPv6DualStackNoUpgrade feature set to enable the dual-stack networking mode. After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. Note Enabling the IPv6DualStackNoUpgrade feature set cannot be undone and prevents updates. This feature set is not recommended on production clusters. Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. Start a debug session for a node: USD oc debug node/<node_name> Change your root directory to the host: sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output ... featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false ... The features that are listed as true are enabled on your cluster. Note The features listed vary depending upon the OpenShift Container Platform version.
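If you prefer a non-interactive change, the same edit can be applied with a patch. This is a sketch rather than part of the documented procedure; it assumes the cluster FeatureGate CR shown above, and the same warning applies: enabling the IPv6DualStackNoUpgrade feature set cannot be undone and prevents updates.

# Apply the feature set with a merge patch instead of an interactive edit
$ oc patch featuregate cluster --type merge -p '{"spec":{"featureSet":"IPv6DualStackNoUpgrade"}}'

# Print the configured feature set to confirm the patch was applied
$ oc get featuregate cluster -o jsonpath='{.spec.featureSet}'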
[ "oc get events [-n <project>] 1", "oc get events -n openshift-config", "LAST SEEN TYPE REASON OBJECT MESSAGE 97m Normal Scheduled pod/dapi-env-test-pod Successfully assigned openshift-config/dapi-env-test-pod to ip-10-0-171-202.ec2.internal 97m Normal Pulling pod/dapi-env-test-pod pulling image \"gcr.io/google_containers/busybox\" 97m Normal Pulled pod/dapi-env-test-pod Successfully pulled image \"gcr.io/google_containers/busybox\" 97m Normal Created pod/dapi-env-test-pod Created container 9m5s Warning FailedCreatePodSandBox pod/dapi-volume-test-pod Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dapi-volume-test-pod_openshift-config_6bc60c1f-452e-11e9-9140-0eec59c23068_0(748c7a40db3d08c07fb4f9eba774bd5effe5f0d5090a242432a73eee66ba9e22): Multus: Err adding pod to network \"openshift-sdn\": cannot set \"openshift-sdn\" ifname to \"eth0\": no netns: failed to Statfs \"/proc/33366/ns/net\": no such file or directory 8m31s Normal Scheduled pod/dapi-volume-test-pod Successfully assigned openshift-config/dapi-volume-test-pod to ip-10-0-171-202.ec2.internal", "apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi", "podman login registry.redhat.io", "podman pull registry.redhat.io/openshift4/ose-cluster-capacity", "podman run -v USDHOME/.kube:/kube:Z -v USD(pwd):/cc:Z ose-cluster-capacity /bin/cluster-capacity --kubeconfig /kube/config --podspec /cc/pod-spec.yaml --verbose 1", "small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 88 instance(s) of the pod small-pod. Termination reason: Unschedulable: 0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. 
Pod distribution among nodes: small-pod - 192.168.124.214: 45 instance(s) - 192.168.124.120: 43 instance(s)", "cat << EOF| oc create -f -", "kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-capacity-role rules: - apiGroups: [\"\"] resources: [\"pods\", \"nodes\", \"persistentvolumeclaims\", \"persistentvolumes\", \"services\", \"replicationcontrollers\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"apps\"] resources: [\"replicasets\", \"statefulsets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"policy\"] resources: [\"poddisruptionbudgets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"storage.k8s.io\"] resources: [\"storageclasses\"] verbs: [\"get\", \"watch\", \"list\"] EOF", "oc create sa cluster-capacity-sa", "oc adm policy add-cluster-role-to-user cluster-capacity-role system:serviceaccount:default:cluster-capacity-sa", "apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi", "oc create configmap cluster-capacity-configmap --from-file=pod.yaml=pod.yaml", "apiVersion: batch/v1 kind: Job metadata: name: cluster-capacity-job spec: parallelism: 1 completions: 1 template: metadata: name: cluster-capacity-pod spec: containers: - name: cluster-capacity image: openshift/origin-cluster-capacity imagePullPolicy: \"Always\" volumeMounts: - mountPath: /test-pod name: test-volume env: - name: CC_INCLUSTER 1 value: \"true\" command: - \"/bin/sh\" - \"-ec\" - | /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose restartPolicy: \"Never\" serviceAccountName: cluster-capacity-sa volumes: - name: test-volume configMap: name: cluster-capacity-configmap", "oc create -f cluster-capacity-job.yaml", "oc logs jobs/cluster-capacity-job", "small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 52 instance(s) of the pod small-pod. Termination reason: Unschedulable: No nodes are available that match all of the following predicates:: Insufficient cpu (2). 
Pod distribution among nodes: small-pod - 192.168.124.214: 26 instance(s) - 192.168.124.120: 26 instance(s)", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" spec: limits: - type: \"Container\" max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: cpu: \"300m\" memory: \"200Mi\" defaultRequest: cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: cpu: \"10\"", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Container\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"100m\" 4 memory: \"4Mi\" 5 default: cpu: \"300m\" 6 memory: \"200Mi\" 7 defaultRequest: cpu: \"200m\" 8 memory: \"100Mi\" 9 maxLimitRequestRatio: cpu: \"10\" 10", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"200m\" 4 memory: \"6Mi\" 5 maxLimitRequestRatio: cpu: \"10\" 6", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/Image max: storage: 1Gi 2", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"PersistentVolumeClaim\" min: storage: \"2Gi\" 2 max: storage: \"50Gi\" 3", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" 2 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"200m\" memory: \"6Mi\" - type: \"Container\" 3 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: 4 cpu: \"300m\" memory: \"200Mi\" defaultRequest: 5 cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: 6 cpu: \"10\" - type: openshift.io/Image 7 max: storage: 1Gi - type: openshift.io/ImageStream 8 max: openshift.io/image-tags: 20 openshift.io/images: 30 - type: \"PersistentVolumeClaim\" 9 min: storage: \"2Gi\" max: storage: \"50Gi\"", "oc create -f <limit_range_file> -n <project> 1", "oc get limits -n demoproject", "NAME CREATED AT resource-limits 2020-07-15T17:14:23Z", "oc describe limits resource-limits -n demoproject", "Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - PersistentVolumeClaim storage - 50Gi - - -", "oc delete limits <limit_name>", "-XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90.", "JAVA_TOOL_OPTIONS=\"-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true\"", "apiVersion: v1 kind: Pod metadata: name: test spec: containers: - name: test image: fedora:latest command: - sleep - \"3600\" env: - name: MEMORY_REQUEST 1 valueFrom: resourceFieldRef: containerName: test resource: requests.memory - name: MEMORY_LIMIT 2 valueFrom: resourceFieldRef: containerName: test resource: limits.memory resources: requests: memory: 384Mi limits: memory: 512Mi", "oc create -f <file-name>.yaml", "oc rsh test", "env | grep MEMORY | sort", 
"MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184", "oc rsh test", "grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control oom_kill 0", "sed -e '' </dev/zero", "Killed", "echo USD?", "137", "grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control oom_kill 1", "oc get pod test", "NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m", "oc get pod test -o yaml", "status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed", "oc get pod test -o yaml", "status: containerStatuses: - name: test ready: true restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running", "oc get pod test", "NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m", "oc get pod test -o yaml", "status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "apiVersion: v1 kind: Namespace metadata: . labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" .", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: . 
mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 .", "apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator", "oc create -f <file-name>.yaml", "oc create -f cro-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator", "oc create -f <file-name>.yaml", "oc create -f cro-og.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: \"4.7\" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f <file-name>.yaml", "oc create -f cro-sub.yaml", "oc project clusterresourceoverride-operator", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "oc create -f <file-name>.yaml", "oc create -f cro-cr.yaml", "oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: . 
mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 .", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3", "apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" 1", "sysctl -a |grep commit", "vm.overcommit_memory = 1", "sysctl -a |grep panic", "vm.panic_on_oom = 0", "oc describe machineconfigpool <name>", "oc describe machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: small-pods 1", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: small-pods 2 kubeletConfig: cpuCfsQuota: 3 - \"false\"", "sysctl -w vm.overcommit_memory=0", "quota.openshift.io/cluster-resource-override-enabled: \"false\"", "oc create -f <file-name>.yaml", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 . spec: featureSet: IPv6DualStackNoUpgrade 2", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "oc edit featuregate cluster", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: IPv6DualStackNoUpgrade 2", "oc debug node/<node_name>", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/nodes/working-with-clusters
5.4.16. RAID Logical Volumes
5.4.16. RAID Logical Volumes As of the Red Hat Enterprise Linux 6.3 release, LVM supports RAID4/5/6 and a new implementation of mirroring. The latest implementation of mirroring differs from the implementation of mirroring (documented in Section 5.4.3, "Creating Mirrored Volumes" ) in the following ways: The segment type for the new implementation of mirroring is raid1 . For the earlier implementation, the segment type is mirror . The new implementation of mirroring leverages MD software RAID, just as for the RAID 4/5/6 implementations. The new implementation of mirroring maintains a fully redundant bitmap area for each mirror image, which increases its fault handling capabilities. This means that there is no --mirrorlog option or --corelog option for mirrors created with this segment type. The new implementation of mirroring can handle transient failures. Mirror images can be temporarily split from the array and merged back into the array later. The new implementation of mirroring supports snapshots (as do the higher-level RAID implementations). The new RAID implementations are not cluster-aware. You cannot create an LVM RAID logical volume in a clustered volume group. For information on how failures are handled by the RAID logical volumes, see Section 5.4.16.8, "Setting a RAID fault policy" . The remainder of this section describes the following administrative tasks you can perform on LVM RAID devices: Section 5.4.16.1, "Creating a RAID Logical Volume" Section 5.4.16.2, "Converting a Linear Device to a RAID Device" Section 5.4.16.3, "Converting an LVM RAID1 Logical Volume to an LVM Linear Logical Volume" Section 5.4.16.4, "Converting a Mirrored LVM Device to a RAID1 Device" Section 5.4.16.5, "Changing the Number of Images in an Existing RAID1 Device" Section 5.4.16.6, "Splitting off a RAID Image as a Separate Logical Volume" Section 5.4.16.7, "Splitting and Merging a RAID Image" Section 5.4.16.8, "Setting a RAID fault policy" Section 5.4.16.9, "Replacing a RAID device" Section 5.4.16.10, "Scrubbing a RAID Logical Volume" Section 5.4.16.11, "Controlling I/O Operations on a RAID1 Logical Volume" 5.4.16.1. Creating a RAID Logical Volume To create a RAID logical volume, you specify a raid type as the --type argument of the lvcreate command. Usually when you create a logical volume with the lvcreate command, the --type argument is implicit. For example, when you specify the -i stripes argument, the lvcreate command assumes the --type stripe option. When you specify the -m mirrors argument, the lvcreate command assumes the --type mirror option. When you create a RAID logical volume, however, you must explicitly specify the segment type you desire. The possible RAID segment types are described in Table 5.1, "RAID Segment Types" . Table 5.1. RAID Segment Types Segment type Description raid1 RAID1 mirroring raid4 RAID4 dedicated parity disk raid5 Same as raid5_ls raid5_la RAID5 left asymmetric. Rotating parity 0 with data continuation raid5_ra RAID5 right asymmetric. Rotating parity N with data continuation raid5_ls RAID5 left symmetric. Rotating parity 0 with data restart raid5_rs RAID5 right symmetric. 
Rotating parity N with data restart raid6 Same as raid6_zr raid6_zr RAID6 zero restart Rotating parity zero (left-to-right) with data restart raid6_nr RAID6 N restart Rotating parity N (left-to-right) with data restart raid6_nc RAID6 N continue Rotating parity N (left-to-right) with data continuation raid10 (Red Hat Enterprise Linux 6.4 and later Striped mirrors Striping of mirror sets For most users, specifying one of the five available primary types ( raid1 , raid4 , raid5 , raid6 , raid10 ) should be sufficient. For more information on the different algorithms used by RAID 5/6, see chapter four of the Common RAID Disk Data Format Specification at http://www.snia.org/sites/default/files/SNIA_DDF_Technical_Position_v2.0.pdf . When you create a RAID logical volume, LVM creates a metadata subvolume that is one extent in size for every data or parity subvolume in the array. For example, creating a 2-way RAID1 array results in two metadata subvolumes ( lv_rmeta_0 and lv_rmeta_1 ) and two data subvolumes ( lv_rimage_0 and lv_rimage_1 ). Similarly, creating a 3-way stripe (plus 1 implicit parity device) RAID4 results in 4 metadata subvolumes ( lv_rmeta_0 , lv_rmeta_1 , lv_rmeta_2 , and lv_rmeta_3 ) and 4 data subvolumes ( lv_rimage_0 , lv_rimage_1 , lv_rimage_2 , and lv_rimage_3 ). The following command creates a 2-way RAID1 array named my_lv in the volume group my_vg that is 1G in size. You can create RAID1 arrays with different numbers of copies according to the value you specify for the -m argument. Although the -m argument is the same argument used to specify the number of copies for the mirror implementation, in this case you override the default segment type mirror by explicitly setting the segment type as raid1 . Similarly, you specify the number of stripes for a RAID 4/5/6 logical volume with the familiar -i argument , overriding the default segment type with the desired RAID type. You can also specify the stripe size with the -I argument. Note You can set the default mirror segment type to raid1 by changing mirror_segtype_default in the lvm.conf file. The following command creates a RAID5 array (3 stripes + 1 implicit parity drive) named my_lv in the volume group my_vg that is 1G in size. Note that you specify the number of stripes just as you do for an LVM striped volume; the correct number of parity drives is added automatically. The following command creates a RAID6 array (3 stripes + 2 implicit parity drives) named my_lv in the volume group my_vg that is 1G in size. After you have created a RAID logical volume with LVM, you can activate, change, remove, display, and use the volume just as you would any other LVM logical volume. When you create RAID10 logical volumes, the background I/O required to initialize the logical volumes with a sync operation can crowd out other I/O operations to LVM devices, such as updates to volume group metadata, particularly when you are creating many RAID logical volumes. This can cause the other LVM operations to slow down. As of Red Hat Enterprise Linux 6.5, you can control the rate at which a RAID logical volume is initialized by implementing recovery throttling. You control the rate at which sync operations are performed by setting the minimum and maximum I/O rate for those operations with the --minrecoveryrate and --maxrecoveryrate options of the lvcreate command. You specify these options as follows. --maxrecoveryrate Rate [bBsSkKmMgG] Sets the maximum recovery rate for a RAID logical volume so that it will not crowd out nominal I/O operations. 
The Rate is specified as an amount per second for each device in the array. If no suffix is given, then kiB/sec/device is assumed. Setting the recovery rate to 0 means it will be unbounded. --minrecoveryrate Rate [bBsSkKmMgG] Sets the minimum recovery rate for a RAID logical volume to ensure that I/O for sync operations achieves a minimum throughput, even when heavy nominal I/O is present. The Rate is specified as an amount per second for each device in the array. If no suffix is given, then kiB/sec/device is assumed. The following command creates a 2-way RAID10 array with 3 stripes that is 10G in size with a maximum recovery rate of 128 kiB/sec/device. The array is named my_lv and is in the volume group my_vg . You can also specify minimum and maximum recovery rates for a RAID scrubbing operation. For information on RAID scrubbing, see Section 5.4.16.10, "Scrubbing a RAID Logical Volume" .
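As a sketch that combines the two recovery rate options described above, the following command creates the same RAID10 volume from the earlier example while bounding the sync rate from both sides. The 32 kiB/sec/device floor is an arbitrary illustrative value; choose a value appropriate to your workload.

# Bound sync I/O between 32 and 128 kiB/sec per device during initialization
lvcreate --type raid10 -i 2 -m 1 -L 10G --minrecoveryrate 32 --maxrecoveryrate 128 -n my_lv my_vg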
[ "lvcreate --type raid1 -m 1 -L 1G -n my_lv my_vg", "lvcreate --type raid5 -i 3 -L 1G -n my_lv my_vg", "lvcreate --type raid6 -i 3 -L 1G -n my_lv my_vg", "lvcreate --type raid10 -i 2 -m 1 -L 10G --maxrecoveryrate 128 -n my_lv my_vg" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/raid_volumes
Securing Red Hat Quay
Securing Red Hat Quay Red Hat Quay 3.13 Securing Red Hat Quay Red Hat OpenShift Documentation Team
[ "openssl genrsa -out rootCA.key 2048", "openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem", "Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com", "openssl genrsa -out ssl.key 2048", "openssl req -new -key ssl.key -out ssl.csr", "Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Email Address []:", "[req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = <quay-server.example.com> IP.1 = 192.168.1.112", "openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf", "ls /path/to/certificates", "rootCA.key ssl-bundle.cert ssl.key custom-ssl-config-bundle-secret.yaml rootCA.pem ssl.cert openssl.cnf rootCA.srl ssl.csr", "cp ~/ssl.cert ~/ssl.key /path/to/configuration_directory", "cd /path/to/configuration_directory", "SERVER_HOSTNAME: <quay-server.example.com> PREFERRED_URL_SCHEME: https", "cat rootCA.pem >> ssl.cert", "sudo podman stop <quay_container_name>", "sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.13.3", "sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt", "sudo podman login quay-server.example.com", "Login Succeeded!", "sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/", "sudo update-ca-trust extract", "trust list | grep quay label: quay-server.example.com", "sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem", "sudo update-ca-trust extract", "trust list | grep quay", "touch custom-ssl-config-bundle-secret.yaml", "oc -n <namespace> create secret generic custom-ssl-config-bundle-secret --from-file=config.yaml=</path/to/config.yaml> \\ 1 --from-file=ssl.cert=</path/to/ssl.cert> \\ 2 --from-file=extra_ca_cert_<name-of-certificate>.crt=ca-certificate-bundle.crt \\ 3 --from-file=ssl.key=</path/to/ssl.key> \\ 4 --dry-run=client -o yaml > custom-ssl-config-bundle-secret.yaml", "cat custom-ssl-config-bundle-secret.yaml", "apiVersion: v1 data: config.yaml: QUxMT1dfUFVMTFNfV0lUSE9VVF9TVFJJQ1RfTE9HR0lORzogZmFsc2UKQVVUSEVOVElDQVRJT05fVFlQRTogRGF0YWJhc2UKREVGQVVMVF9UQUdfRVhQSVJBVElPTjogMncKRElTVFJJQlVURURfU1R ssl.cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVYakNDQTBhZ0F3SUJBZ0lVTUFBRk1YVWlWVHNoMGxNTWI3U1l0eFV5eTJjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2dZZ3hDekFKQmdOVkJBWVR extra_ca_cert_<name-of-certificate>:LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVYakNDQTBhZ0F3SUJBZ0lVTUFBRk1YVWlWVHNoMGxNTWI3U1l0eFV5eTJjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2dZZ3hDe ssl.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2c0VWxZOVV1SVJPY1oKcFhpZk9MVEdqaS9neUxQMlpiMXQ kind: Secret metadata: creationTimestamp: null name: custom-ssl-config-bundle-secret namespace: <namespace>", "oc create -n <namespace> -f 
custom-ssl-config-bundle-secret.yaml", "secret/custom-ssl-config-bundle-secret created", "oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{\"spec\":{\"configBundleSecret\":\"custom-ssl-config-bundle-secret\"}}'", "quayregistry.quay.redhat.com/example-registry patched", "oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{\"spec\":{\"components\":[{\"kind\":\"tls\",\"managed\":false}]}}'", "quayregistry.quay.redhat.com/example-registry patched", "oc get quayregistry <registry_name> -n <namespace> -o yaml", "configBundleSecret: custom-ssl-config-bundle-secret spec: components: - kind: tls managed: false", "openssl s_client -connect <quay-server.example.com>:443", "SSL-Session: Protocol : TLSv1.3 Cipher : TLS_AES_256_GCM_SHA384 Session-ID: 0E995850DC3A8EB1A838E2FF06CE56DBA81BD8443E7FA05895FBD6FBDE9FE737 Session-ID-ctx: Resumption PSK: 1EA68F33C65A0F0FA2655BF9C1FE906152C6E3FEEE3AEB6B1B99BA7C41F06077989352C58E07CD2FBDC363FA8A542975 PSK identity: None PSK identity hint: None SRP username: None TLS session ticket lifetime hint: 7200 (seconds)", "psql \"sslmode=verify-ca sslrootcert=<ssl_server_certificate_authority>.pem sslcert=<ssl_client_certificate>.pem sslkey=<ssl_client_key>.pem hostaddr=<database_host> port=<5432> user=<cloudsql_username> dbname=<cloudsql_database_name>\"", "touch quay-config-bundle.yaml", "oc -n <quay_namespace> create secret generic postgresql-client-certs --from-file config.yaml=<path/to/config.yaml> 1 --from-file=tls.crt=<path/to/ssl_client_certificate.pem> 2 --from-file=tls.key=<path/to/ssl_client_key.pem> 3 --from-file=ca.crt=<path/to/ssl_server_certificate.pem> 4", "DB_CONNECTION_ARGS: autorollback: true sslmode: verify-ca 1 sslrootcert: /.postgresql/root.crt 2 sslcert: /.postgresql/postgresql.crt 3 sslkey: /.postgresql/postgresql.key 4 threadlocals: true 5 DB_URI: postgresql://<dbusername>:<dbpassword>@<database_host>:<port>/<database_name>?sslmode=verify-full&sslrootcert=/.postgresql/root.crt&sslcert=/.postgresql/postgresql.crt&sslkey=/.postgresql/postgresql.key 6", "oc create -n <namespace> -f quay-config-bundle.yaml", "secret/quay-config-bundle created", "oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{\"spec\":{\"configBundleSecret\":\"quay-config-bundle\"}}'", "quayregistry.quay.redhat.com/example-registry patched", "oc get quayregistry <registry_name> -n <namespace> -o yaml", "configBundleSecret: quay-config-bundle", "cat storage.crt", "-----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV -----END CERTIFICATE-----", "mkdir -p /path/to/quay_config_folder/extra_ca_certs", "cp storage.crt /path/to/quay_config_folder/extra_ca_certs/", "tree /path/to/quay_config_folder/extra_ca_certs", "/path/to/quay_config_folder/extra_ca_certs ├── storage.crt----", "podman ps", "CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS 5a3e82c4a75f <registry>/<repo>/quay:{productminv} \"/sbin/my_init\" 24 hours ago Up 18 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 443/tcp grave_keller", "podman restart 5a3e82c4a75f", "podman exec -it 5a3e82c4a75f cat /etc/ssl/certs/storage.pem", "-----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV -----END CERTIFICATE-----", "oc describe quayregistry -n <quay_namespace>", "Config Bundle Secret: example-registry-config-bundle-v123x", "oc get secret -n <quay_namespace> <example-registry-config-bundle-v123x> -o jsonpath='{.data}'", "{ \"config.yaml\": \"RkVBVFVSRV9VU0 ... MDAwMAo=\" }", "echo 'RkVBVFVSRV9VU0 ... 
MDAwMAo=' | base64 --decode", "FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false FEATURE_QUOTA_MANAGEMENT: true FEATURE_PROXY_CACHE: true FEATURE_BUILD_SUPPORT: true DEFAULT_SYSTEM_REJECT_QUOTA_BYTES: 102400000", "echo 'RkVBVFVSRV9VU0 ... MDAwMAo=' | base64 --decode >> config.yaml", "touch extra-ca-certificate-config-bundle-secret.yaml", "oc -n <namespace> create secret generic extra-ca-certificate-config-bundle-secret --from-file=config.yaml=</path/to/config.yaml> \\ 1 --from-file=extra_ca_cert_<name-of-certificate-one>=<path/to/certificate_one> \\ 2 --from-file=extra_ca_cert_<name-of-certificate-two>=<path/to/certificate_two> \\ 3 --from-file=extra_ca_cert_<name-of-certificate-three>=<path/to/certificate_three> \\ 4 --dry-run=client -o yaml > extra-ca-certificate-config-bundle-secret.yaml", "cat extra-ca-certificate-config-bundle-secret.yaml", "apiVersion: v1 data: config.yaml: QUxMT1dfUFVMTFNfV0lUSE9VVF9TVFJJQ1RfTE9HR0lORzogZmFsc2UKQVVUSEVOVElDQVRJT05fVFlQRTogRGF0YWJhc2UKREVGQVVMVF9UQUdfRVhQSVJBVElPTjogMncKUFJFRkVSU extra_ca_cert_certificate-one: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQyVENDQXNHZ0F3SUJBZ0lVS2xOai90VUJBZHBkNURjYkdRQUo4anRuKzd3d0RRWUpLb1pJaHZjTkFRRUwKQlFBd2ZERUxNQWtHQ extra_ca_cert_certificate-three: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ0ekNDQXN1Z0F3SUJBZ0lVQmJpTXNUeExjM0s4ODNWby9GTThsWXlOS2lFd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2ZERUxNQWtHQ extra_ca_cert_certificate-two: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ0ekNDQXN1Z0F3SUJBZ0lVVFVPTXZ2YVdFOFRYV3djYTNoWlBCTnV2QjYwd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2ZERUxNQWtHQ kind: Secret metadata: creationTimestamp: null name: custom-ssl-config-bundle-secret namespace: <namespace>", "oc create -n <namespace> -f extra-ca-certificate-config-bundle-secret.yaml", "secret/extra-ca-certificate-config-bundle-secret created", "oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{\"spec\":{\"configBundleSecret\":\"extra-ca-certificate-config-bundle-secret\"}}'", "quayregistry.quay.redhat.com/example-registry patched", "oc get quayregistry <registry_name> -n <namespace> -o yaml", "configBundleSecret: extra-ca-certificate-config-bundle-secret", "cat ca.crt | base64 -w 0", "...c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "kubectl --namespace quay-enterprise edit secret/quay-enterprise-config-secret", "custom-cert.crt: c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "kubectl delete pod quay-operator.v3.7.1-6f9d859bd-p5ftc quayregistry-clair-postgres-7487f5bd86-xnxpr quayregistry-quay-app-upgrade-xq2v6 quayregistry-quay-database-859d5445ff-cqthr quayregistry-quay-redis-84f888776f-hhgms" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html-single/securing_red_hat_quay/index
7.77. hyperv-daemons
7.77. hyperv-daemons 7.77.1. RHBA-2015:1311 - hyperv-daemons bug fix update Updated hyperv-daemons packages that fix one bug are now available for Red Hat Enterprise Linux 6. The hyperv-daemons packages provide a suite of daemons that are needed when a Red Hat Enterprise Linux guest is running on Microsoft Hyper-V. The following daemons are included: - hypervkvpd, the guest Hyper-V Key-Value Pair (KVP) daemon - hypervvssd, the implementation of Hyper-V VSS functionality - hypervfcopyd, the implementation of Hyper-V file copy service functionality Bug Fix BZ# 1161368 When mounting a read-only file system that does not support file system freezing (such as SquashFS) and using the online backup feature, the online backup previously failed with an "Operation not supported" error. This update fixes the hypervvssd daemon so that it handles the online backup correctly, and the described error no longer occurs. Users of hyperv-daemons are advised to upgrade to these updated packages, which fix this bug.
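If you want to confirm that the daemons are active after applying the update, a minimal sketch follows; it assumes the init scripts are named after the daemons listed above, which is the usual layout for this package on Red Hat Enterprise Linux 6 but is not stated in the advisory itself:
# Check each Hyper-V daemon; the service names are assumed to match the daemon names above
service hypervkvpd status
service hypervvssd status
service hypervfcopyd status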
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-hyperv-daemons
3.3. Setting up errata viewing with Red Hat Satellite
3.3. Setting up errata viewing with Red Hat Satellite In the Administration Portal, you can configure Red Hat Virtualization to view errata from Red Hat Satellite in the Red Hat Virtualization Manager. After you associate your hosts, virtual machines, and the Manager with a Red Hat Satellite provider, you can receive updates about available errata and their importance, and decide when to apply them. For more information about Red Hat Satellite see the Red Hat Satellite Documentation . Red Hat Virtualization 4.4 supports viewing errata with Red Hat Satellite 6.6. Prerequisites The Satellite server must be added as an external provider. The Manager, hosts, and virtual machines must all be registered in the Satellite server by their respective FQDNs. This ensures that external content host IDs do not need to be maintained in Red Hat Virtualization. The Satellite account that manages the Manager, hosts and virtual machines must have Administrator permissions and a default organization set. Note The Katello agent is deprecated and will be removed in a future Satellite version. Migrate your processes to use the remote execution feature to update clients remotely. Configuring Red Hat Virtualization errata To associate the Manager, host, and virtual machine with a Red Hat Satellite provider, complete the following tasks: Add the required Satellite server to the Manager as an external provider . Configure the required hosts to display available errata . Configure the required virtual machines to display available errata . Viewing Red Hat Virtualization Manager errata Click Administration Errata . Select the Security , Bugs , or Enhancements check boxes to view only those errata types. Additional resources Configuring Satellite Errata Management for a Host Installing the Guest Agents, Tools, and Drivers on Linux in the Virtual Machine Management Guide for Red Hat Enterprise Linux virtual machines. Installing the Guest Agents, Tools, and Drivers on Windows in the Virtual Machine Management Guide for Windows virtual machines. Viewing Host Errata Configuring Satellite errata viewing for a virtual machine in the Virtual Machine Management Guide for more information. Viewing Red Hat Satellite errata for a virtual machine in the Virtual Machine Management Guide .
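The prerequisite that the Manager, hosts, and virtual machines are registered in the Satellite server by FQDN is typically met with subscription-manager. As a minimal sketch, assuming an organization and activation key already exist on the Satellite server (satellite.example.com, <organization_name>, and <activation_key_name> are placeholders, not values from this guide):
# Install the Satellite server's CA consumer package on the system being registered (placeholder host name)
rpm -Uvh http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm
# Register the system using a hypothetical organization and activation key
subscription-manager register --org="<organization_name>" --activationkey="<activation_key_name>"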
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/chap-errata_management_with_satellite
Chapter 1. Go Toolset
Chapter 1. Go Toolset Go Toolset is a Red Hat offering for developers on Red Hat Enterprise Linux (RHEL). It provides the Go programming language tools and libraries. Note that Go is alternatively known as golang. Go Toolset is available as a module for Red Hat Enterprise Linux 8. Go Toolset is available as packages for Red Hat Enterprise Linux 9. 1.1. Go Toolset components The following components are available as a part of Go Toolset:
golang (RHEL 8 - 1.20.10, RHEL 9 - 1.20.10): A Go compiler.
delve (RHEL 8 - 1.20.2, RHEL 9 - 1.20.2): A Go debugger.
1.2. Go Toolset compatibility Go Toolset is available for Red Hat Enterprise Linux 8 and Red Hat Enterprise Linux 9 on the following architectures:
AMD and Intel 64-bit
64-bit ARM
IBM Power Systems, Little Endian
64-bit IBM Z
1.3. Installing Go Toolset Complete the following steps to install Go Toolset, including all dependent packages. Prerequisites All available Red Hat Enterprise Linux updates are installed. Procedure On Red Hat Enterprise Linux 8, install the go-toolset module by running: On Red Hat Enterprise Linux 9, install the go-toolset package by running: 1.4. Installing Go documentation You can install documentation for the Go programming language on your local system. Prerequisites Go Toolset is installed. For more information, see Installing Go Toolset . Procedure To install the golang-docs package, run the following command: On Red Hat Enterprise Linux 8: You can find the documentation under the following path: /usr/lib/golang/doc/go_spec.html . On Red Hat Enterprise Linux 9: You can find the documentation under the following path: /usr/lib/golang/doc/go_spec.html . 1.5. Additional resources For more information on the Go programming language, tools, and libraries, see the official Go documentation .
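The sections above cover installation only. To confirm that the compiler works after installing, a brief sketch follows; the hello directory, module path, and program are hypothetical and not part of the original documentation:
# Confirm the installed compiler version
go version
# Build and run a minimal program (hypothetical example)
mkdir hello && cd hello
go mod init example.com/hello
cat > main.go << 'EOF'
package main

import "fmt"

func main() {
	fmt.Println("Hello from Go Toolset")
}
EOF
go build
./hello
# If the delve package is installed, the debugger can be checked the same way
dlv version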
[ "yum module install go-toolset", "dnf install go-toolset", "yum install golang-docs", "dnf install golang-docs" ]
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_go_1.20.10_toolset/assembly_go-toolset_using-go-toolset