8.3.6. Viewing Scan Results and Generating Scan Reports After the system scan is finished, three new buttons, Clear , Save Results , and Show Report , will appear instead of the Scan button. Warning Clicking the Clear button permanently removes the scan results. To store the scan results in the form of an XCCDF, ARF, or HTML file, click the Save Results combo box. Choose the HTML Report option to generate the scan report in human-readable form. The XCCDF and ARF (data stream) formats are suitable for further automatic processing. You can repeatedly choose all three options. If you prefer to view the scan results immediately without saving them, you can click the Show Report button, which opens the scan results in the form of a temporary HTML file in your default web browser.
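As an aside for command-line users, the same three output formats can also be produced with the oscap tool rather than the workbench GUI; the data stream path and profile ID below are illustrative assumptions, not values taken from this procedure.

# Evaluate a SCAP data stream and save XCCDF results, an ARF data stream, and an HTML report
oscap xccdf eval \
    --profile xccdf_org.ssgproject.content_profile_pci-dss \
    --results results.xccdf.xml \
    --results-arf results.arf.xml \
    --report report.html \
    /usr/share/xml/scap/ssg/content/ssg-rhel6-ds.xml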
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-generating_reports_on_workbench
Chapter 2. Container security 2.1. Understanding container security Securing a containerized application relies on multiple levels of security: Container security begins with a trusted base container image and continues through the container build process as it moves through your CI/CD pipeline. Important Image streams by default do not automatically update. This default behavior might create a security issue because security updates to images referenced by an image stream do not automatically occur. For information about how to override this default behavior, see Configuring periodic importing of imagestreamtags . When a container is deployed, its security depends on it running on secure operating systems and networks, and establishing firm boundaries between the container itself and the users and hosts that interact with it. Continued security relies on being able to scan container images for vulnerabilities and having an efficient way to correct and replace vulnerable images. Beyond what a platform such as OpenShift Container Platform offers out of the box, your organization will likely have its own security demands. Some level of compliance verification might be needed before you can even bring OpenShift Container Platform into your data center. Likewise, you may need to add your own agents, specialized hardware drivers, or encryption features to OpenShift Container Platform, before it can meet your organization's security standards. This guide provides a high-level walkthrough of the container security measures available in OpenShift Container Platform, including solutions for the host layer, the container and orchestration layer, and the build and application layer. It then points you to specific OpenShift Container Platform documentation to help you achieve those security measures. This guide contains the following information: Why container security is important and how it compares with existing security standards. Which container security measures are provided by the host (RHCOS and RHEL) layer and which are provided by OpenShift Container Platform. How to evaluate your container content and sources for vulnerabilities. How to design your build and deployment process to proactively check container content. How to control access to containers through authentication and authorization. How networking and attached storage are secured in OpenShift Container Platform. Containerized solutions for API management and SSO. The goal of this guide is to understand the incredible security benefits of using OpenShift Container Platform for your containerized workloads and how the entire Red Hat ecosystem plays a part in making and keeping containers secure. It will also help you understand how you can engage with the OpenShift Container Platform to achieve your organization's security goals. 2.1.1. What are containers? Containers package an application and all its dependencies into a single image that can be promoted from development, to test, to production, without change. A container might be part of a larger application that works closely with other containers. Containers provide consistency across environments and multiple deployment targets: physical servers, virtual machines (VMs), and private or public cloud. 
Some of the benefits of using containers include: Infrastructure Applications Sandboxed application processes on a shared Linux operating system kernel Package my application and all of its dependencies Simpler, lighter, and denser than virtual machines Deploy to any environment in seconds and enable CI/CD Portable across different environments Easily access and share containerized components See Understanding Linux containers from the Red Hat Customer Portal to find out more about Linux containers. To learn about RHEL container tools, see Building, running, and managing containers in the RHEL product documentation. 2.1.2. What is OpenShift Container Platform? Automating how containerized applications are deployed, run, and managed is the job of a platform such as OpenShift Container Platform. At its core, OpenShift Container Platform relies on the Kubernetes project to provide the engine for orchestrating containers across many nodes in scalable data centers. Kubernetes is a project, which can run using different operating systems and add-on components that offer no guarantees of supportability from the project. As a result, the security of different Kubernetes platforms can vary. OpenShift Container Platform is designed to lock down Kubernetes security and integrate the platform with a variety of extended components. To do this, OpenShift Container Platform draws on the extensive Red Hat ecosystem of open source technologies that include the operating systems, authentication, storage, networking, development tools, base container images, and many other components. OpenShift Container Platform can leverage Red Hat's experience in uncovering and rapidly deploying fixes for vulnerabilities in the platform itself as well as the containerized applications running on the platform. Red Hat's experience also extends to efficiently integrating new components with OpenShift Container Platform as they become available and adapting technologies to individual customer needs. Additional resources OpenShift Container Platform architecture OpenShift Security Guide 2.2. Understanding host and VM security Both containers and virtual machines provide ways of separating applications running on a host from the operating system itself. Understanding RHCOS, which is the operating system used by OpenShift Container Platform, will help you see how the host systems protect containers and hosts from each other. 2.2.1. Securing containers on Red Hat Enterprise Linux CoreOS (RHCOS) Containers simplify the act of deploying many applications to run on the same host, using the same kernel and container runtime to spin up each container. The applications can be owned by many users and, because they are kept separate, can run different, and even incompatible, versions of those applications at the same time without issue. In Linux, containers are just a special type of process, so securing containers is similar in many ways to securing any other running process. An environment for running containers starts with an operating system that can secure the host kernel from containers and other processes running on the host, as well as secure containers from each other. Because OpenShift Container Platform 4.10 runs on RHCOS hosts, with the option of using Red Hat Enterprise Linux (RHEL) as worker nodes, the following concepts apply by default to any deployed OpenShift Container Platform cluster. 
These RHEL security features are at the core of what makes running containers in OpenShift Container Platform more secure: Linux namespaces enable creating an abstraction of a particular global system resource to make it appear as a separate instance to processes within a namespace. Consequently, several containers can use the same computing resource simultaneously without creating a conflict. Container namespaces that are separate from the host by default include mount table, process table, network interface, user, control group, UTS, and IPC namespaces. Those containers that need direct access to host namespaces need to have elevated permissions to request that access. See Overview of Containers in Red Hat Systems from the RHEL 8 container documentation for details on the types of namespaces. SELinux provides an additional layer of security to keep containers isolated from each other and from the host. SELinux allows administrators to enforce mandatory access controls (MAC) for every user, application, process, and file. Warning Disabling SELinux on RHCOS is not supported. CGroups (control groups) limit, account for, and isolate the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes. CGroups are used to ensure that containers on the same host are not impacted by each other. Secure computing mode (seccomp) profiles can be associated with a container to restrict available system calls. See page 94 of the OpenShift Security Guide for details about seccomp. Deploying containers using RHCOS reduces the attack surface by minimizing the host environment and tuning it for containers. The CRI-O container engine further reduces that attack surface by implementing only those features required by Kubernetes and OpenShift Container Platform to run and manage containers, as opposed to other container engines that implement desktop-oriented standalone features. RHCOS is a version of Red Hat Enterprise Linux (RHEL) that is specially configured to work as control plane (master) and worker nodes on OpenShift Container Platform clusters. So RHCOS is tuned to efficiently run container workloads, along with Kubernetes and OpenShift Container Platform services. To further protect RHCOS systems in OpenShift Container Platform clusters, most containers, except those managing or monitoring the host system itself, should run as a non-root user. Dropping the privilege level or creating containers with the least amount of privileges possible is recommended best practice for protecting your own OpenShift Container Platform clusters. Additional resources How nodes enforce resource constraints Managing security context constraints Supported platforms for OpenShift clusters Requirements for a cluster with user-provisioned infrastructure Choosing how to configure RHCOS Ignition Kernel arguments Kernel modules FIPS cryptography Disk encryption Chrony time service About the OpenShift Update Service 2.2.2. Comparing virtualization and containers Traditional virtualization provides another way to keep application environments separate on the same physical host. However, virtual machines work in a different way than containers. Virtualization relies on a hypervisor spinning up guest virtual machines (VMs), each of which has its own operating system (OS), represented by a running kernel, as well as the running application and its dependencies. With VMs, the hypervisor isolates the guests from each other and from the host kernel. 
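To make the seccomp and non-root guidance above concrete, here is a minimal sketch of a pod specification; the pod name, UID, image, and command are illustrative assumptions rather than OpenShift defaults.

apiVersion: v1
kind: Pod
metadata:
  name: hardened-example                 # illustrative name
spec:
  securityContext:
    runAsNonRoot: true                   # refuse to start containers that would run as UID 0
    runAsUser: 1001                      # any non-zero UID; OpenShift's restricted SCC assigns one automatically
    seccompProfile:
      type: RuntimeDefault               # apply the container runtime's default seccomp profile
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal   # illustrative image
    command: ["sleep", "infinity"]
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]                    # drop all Linux capabilities the workload does not need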
Fewer individuals and processes have access to the hypervisor, reducing the attack surface on the physical server. That said, security must still be monitored: one guest VM might be able to use hypervisor bugs to gain access to another VM or the host kernel. And, when the OS needs to be patched, it must be patched on all guest VMs using that OS. Containers can be run inside guest VMs, and there might be use cases where this is desirable. For example, you might be deploying a traditional application in a container, perhaps to lift-and-shift an application to the cloud. Container separation on a single host, however, provides a more lightweight, flexible, and easier-to-scale deployment solution. This deployment model is particularly appropriate for cloud-native applications. Containers are generally much smaller than VMs and consume less memory and CPU. See Linux Containers Compared to KVM Virtualization in the RHEL 7 container documentation to learn about the differences between container and VMs. 2.2.3. Securing OpenShift Container Platform When you deploy OpenShift Container Platform, you have the choice of an installer-provisioned infrastructure (there are several available platforms) or your own user-provisioned infrastructure. Some low-level security-related configuration, such as enabling FIPS compliance or adding kernel modules required at first boot, might benefit from a user-provisioned infrastructure. Likewise, user-provisioned infrastructure is appropriate for disconnected OpenShift Container Platform deployments. Keep in mind that, when it comes to making security enhancements and other configuration changes to OpenShift Container Platform, the goals should include: Keeping the underlying nodes as generic as possible. You want to be able to easily throw away and spin up similar nodes quickly and in prescriptive ways. Managing modifications to nodes through OpenShift Container Platform as much as possible, rather than making direct, one-off changes to the nodes. In pursuit of those goals, most node changes should be done during installation through Ignition or later using MachineConfigs that are applied to sets of nodes by the Machine Config Operator. Examples of security-related configuration changes you can do in this way include: Adding kernel arguments Adding kernel modules Enabling support for FIPS cryptography Configuring disk encryption Configuring the chrony time service Besides the Machine Config Operator, there are several other Operators available to configure OpenShift Container Platform infrastructure that are managed by the Cluster Version Operator (CVO). The CVO is able to automate many aspects of OpenShift Container Platform cluster updates. 2.3. Hardening RHCOS RHCOS was created and tuned to be deployed in OpenShift Container Platform with few if any changes needed to RHCOS nodes. Every organization adopting OpenShift Container Platform has its own requirements for system hardening. As a RHEL system with OpenShift-specific modifications and features added (such as Ignition, ostree, and a read-only /usr to provide limited immutability), RHCOS can be hardened just as you would any RHEL system. Differences lie in the ways you manage the hardening. A key feature of OpenShift Container Platform and its Kubernetes engine is to be able to quickly scale applications and infrastructure up and down as needed. Unless it is unavoidable, you do not want to make direct changes to RHCOS by logging into a host and adding software or changing settings. 
You want to have the OpenShift Container Platform installer and control plane manage changes to RHCOS so new nodes can be spun up without manual intervention. So, if you are setting out to harden RHCOS nodes in OpenShift Container Platform to meet your security needs, you should consider both what to harden and how to go about doing that hardening. 2.3.1. Choosing what to harden in RHCOS The RHEL 8 Security Hardening guide describes how you should approach security for any RHEL system. Use this guide to learn how to approach cryptography, evaluate vulnerabilities, and assess threats to various services. Likewise, you can learn how to scan for compliance standards, check file integrity, perform auditing, and encrypt storage devices. With the knowledge of what features you want to harden, you can then decide how to harden them in RHCOS. 2.3.2. Choosing how to harden RHCOS Direct modification of RHCOS systems in OpenShift Container Platform is discouraged. Instead, you should think of modifying systems in pools of nodes, such as worker nodes and control plane nodes. When a new node is needed, in non-bare metal installs, you can request a new node of the type you want and it will be created from an RHCOS image plus the modifications you created earlier. There are opportunities for modifying RHCOS before installation, during installation, and after the cluster is up and running. 2.3.2.1. Hardening before installation For bare metal installations, you can add hardening features to RHCOS before beginning the OpenShift Container Platform installation. For example, you can add kernel options when you boot the RHCOS installer to turn security features on or off, such as various SELinux booleans or low-level settings, such as symmetric multithreading. Warning Disabling SELinux on RHCOS nodes is not supported. Although bare metal RHCOS installations are more difficult, they offer the opportunity of getting operating system changes in place before starting the OpenShift Container Platform installation. This can be important when you need to ensure that certain features, such as disk encryption or special networking settings, be set up at the earliest possible moment. 2.3.2.2. Hardening during installation You can interrupt the OpenShift Container Platform installation process and change Ignition configs. Through Ignition configs, you can add your own files and systemd services to the RHCOS nodes. You can also make some basic security-related changes to the install-config.yaml file used for installation. Contents added in this way are available at each node's first boot. 2.3.2.3. Hardening after the cluster is running After the OpenShift Container Platform cluster is up and running, there are several ways to apply hardening features to RHCOS: Daemon set: If you need a service to run on every node, you can add that service with a Kubernetes DaemonSet object . Machine config: MachineConfig objects contain a subset of Ignition configs in the same format. By applying machine configs to all worker or control plane nodes, you can ensure that the node of the same type that is added to the cluster has the same changes applied. All of the features noted here are described in the OpenShift Container Platform product documentation. 
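As a hedged illustration of the machine config approach described above, the following sketch adds a kernel argument to every worker node; the object name and the argument itself are illustrative assumptions.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-kernel-arg-example                   # illustrative name
  labels:
    machineconfiguration.openshift.io/role: worker     # target the worker pool
spec:
  kernelArguments:
    - audit=1                                          # example argument; substitute what your policy requires

Applying the object with oc apply -f hands it to the Machine Config Operator, which rolls the change out across the worker pool, rebooting nodes as needed.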
Additional resources OpenShift Security Guide Choosing how to configure RHCOS Modifying Nodes Manually creating the installation configuration file Creating the Kubernetes manifest and Ignition config files Installing RHCOS by using an ISO image Customizing nodes Adding kernel arguments to Nodes Installation configuration parameters - see fips Support for FIPS cryptography RHEL core crypto components 2.4. Container image signatures Red Hat delivers signatures for the images in the Red Hat Container Registries. Those signatures can be automatically verified when being pulled to OpenShift Container Platform 4 clusters by using the Machine Config Operator (MCO). Quay.io serves most of the images that make up OpenShift Container Platform, and only the release image is signed. Release images refer to the approved OpenShift Container Platform images, offering a degree of protection against supply chain attacks. However, some extensions to OpenShift Container Platform, such as logging, monitoring, and service mesh, are shipped as Operators from the Operator Lifecycle Manager (OLM). Those images ship from the Red Hat Ecosystem Catalog Container images registry. To verify the integrity of those images between Red Hat registries and your infrastructure, enable signature verification. 2.4.1. Enabling signature verification for Red Hat Container Registries Enabling container signature validation for Red Hat Container Registries requires writing a signature verification policy file specifying the keys to verify images from these registries. For RHEL8 nodes, the registries are already defined in /etc/containers/registries.d by default. Procedure Create a Butane config file, 51-worker-rh-registry-trust.bu , containing the necessary configuration for the worker nodes. Note See "Creating machine configs with Butane" for information about Butane. 
variant: openshift version: 4.10.0 metadata: name: 51-worker-rh-registry-trust labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/containers/policy.json mode: 0644 overwrite: true contents: inline: | { "default": [ { "type": "insecureAcceptAnything" } ], "transports": { "docker": { "registry.access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "registry.redhat.io": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "docker-daemon": { "": [ { "type": "insecureAcceptAnything" } ] } } } Use Butane to generate a machine config YAML file, 51-worker-rh-registry-trust.yaml , containing the file to be written to disk on the worker nodes: USD butane 51-worker-rh-registry-trust.bu -o 51-worker-rh-registry-trust.yaml Apply the created machine config: USD oc apply -f 51-worker-rh-registry-trust.yaml Check that the worker machine config pool has rolled out with the new machine config: Check that the new machine config was created: USD oc get mc Sample output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 00-worker a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 51-master-rh-registry-trust 3.2.0 13s 51-worker-rh-registry-trust 3.2.0 53s 1 99-master-generated-crio-seccomp-use-default 3.2.0 25m 99-master-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-master-ssh 3.2.0 28m 99-worker-generated-crio-seccomp-use-default 3.2.0 25m 99-worker-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-worker-ssh 3.2.0 28m rendered-master-af1e7ff78da0a9c851bab4be2777773b a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 8s rendered-master-cd51fd0c47e91812bfef2765c52ec7e6 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-2b52f75684fbc711bd1652dd86fd0b82 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-be3b3bce4f4aa52a62902304bac9da3c a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 48s 2 1 New machine config 2 New rendered machine config Check that the worker machine config pool is updating with the new machine config: USD oc get mcp Sample output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-af1e7ff78da0a9c851bab4be2777773b True False False 3 3 3 0 30m worker rendered-worker-be3b3bce4f4aa52a62902304bac9da3c False True False 3 0 0 0 30m 1 1 When the UPDATING field is True , the machine config pool is updating with the new machine config. When the field becomes False , the worker machine config pool has rolled out to the new machine config. If your cluster uses any RHEL7 worker nodes, when the worker machine config pool is updated, create YAML files on those nodes in the /etc/containers/registries.d directory, which specify the location of the detached signatures for a given registry server. The following example works only for images hosted in registry.access.redhat.com and registry.redhat.io . 
Start a debug session to each RHEL7 worker node: USD oc debug node/<node_name> Change your root directory to /host : sh-4.2# chroot /host Create a /etc/containers/registries.d/registry.redhat.io.yaml file that contains the following: docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore Create a /etc/containers/registries.d/registry.access.redhat.com.yaml file that contains the following: docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore Exit the debug session. 2.4.2. Verifying the signature verification configuration After you apply the machine configs to the cluster, the Machine Config Controller detects the new MachineConfig object and generates a new rendered-worker-<hash> version. Prerequisites You enabled signature verification by using a machine config file. Procedure On the command line, run the following command to display information about a desired worker: USD oc describe machineconfigpool/worker Example output of initial worker monitoring Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Metadata: Creation Timestamp: 2019-12-19T02:02:12Z Generation: 3 Resource Version: 16229 Self Link: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker UID: 92697796-2203-11ea-b48c-fa163e3940e5 Spec: Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Machine Config Selector: Match Labels: machineconfiguration.openshift.io/role: worker Node Selector: Match Labels: node-role.kubernetes.io/worker: Paused: false Status: Conditions: Last Transition Time: 2019-12-19T02:03:27Z Message: Reason: Status: False Type: RenderDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: NodeDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: Degraded Last Transition Time: 2019-12-19T02:28:23Z Message: Reason: Status: False Type: Updated Last Transition Time: 2019-12-19T02:28:23Z Message: All nodes are updating to rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updating Configuration: Name: rendered-worker-d9b3f4ffcfd65c30dcf591a0e8cf9b2e Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 1 Observed Generation: 3 Ready Machine Count: 0 Unavailable Machine Count: 1 Updated 
Machine Count: 0 Events: <none> Run the oc describe command again: USD oc describe machineconfigpool/worker Example output after the worker is updated ... Last Transition Time: 2019-12-19T04:53:09Z Message: All nodes are updated with rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updated Last Transition Time: 2019-12-19T04:53:09Z Message: Reason: Status: False Type: Updating Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 4 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3 ... Note The Observed Generation parameter shows an increased count based on the generation of the controller-produced configuration. This controller updates this value even if it fails to process the specification and generate a revision. The Configuration Source value points to the 51-worker-rh-registry-trust configuration. Confirm that the policy.json file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/policy.json Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` { "default": [ { "type": "insecureAcceptAnything" } ], "transports": { "docker": { "registry.access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "registry.redhat.io": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "docker-daemon": { "": [ { "type": "insecureAcceptAnything" } ] } } } Confirm that the registry.redhat.io.yaml file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.redhat.io.yaml Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore Confirm that the registry.access.redhat.com.yaml file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.access.redhat.com.yaml Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore 2.4.3. Additional resources Machine Config Overview 2.5. Understanding compliance For many OpenShift Container Platform customers, regulatory readiness, or compliance, on some level is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards or the organization's corporate governance framework. 2.5.1. 
Understanding compliance and risk management FIPS compliance is one of the most critical components required in highly secure environments, to ensure that only supported cryptographic technologies are allowed on nodes. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. To understand Red Hat's view of OpenShift Container Platform compliance frameworks, refer to the Risk Management and Regulatory Readiness chapter of the OpenShift Security Guide Book . Additional resources Installing a cluster in FIPS mode 2.6. Securing container content To ensure the security of the content inside your containers you need to start with trusted base images, such as Red Hat Universal Base Images, and add trusted software. To check the ongoing security of your container images, there are both Red Hat and third-party tools for scanning images. 2.6.1. Securing inside the container Applications and infrastructures are composed of readily available components, many of which are open source packages such as, the Linux operating system, JBoss Web Server, PostgreSQL, and Node.js. Containerized versions of these packages are also available. However, you need to know where the packages originally came from, what versions are used, who built them, and whether there is any malicious code inside them. Some questions to answer include: Will what is inside the containers compromise your infrastructure? Are there known vulnerabilities in the application layer? Are the runtime and operating system layers current? By building your containers from Red Hat Universal Base Images (UBI) you are assured of a foundation for your container images that consists of the same RPM-packaged software that is included in Red Hat Enterprise Linux. No subscriptions are required to either use or redistribute UBI images. To assure ongoing security of the containers themselves, security scanning features, used directly from RHEL or added to OpenShift Container Platform, can alert you when an image you are using has vulnerabilities. OpenSCAP image scanning is available in RHEL and the Red Hat Quay Container Security Operator can be added to check container images used in OpenShift Container Platform. 2.6.2. Creating redistributable images with UBI To create containerized applications, you typically start with a trusted base image that offers the components that are usually provided by the operating system. These include the libraries, utilities, and other features the application expects to see in the operating system's file system. Red Hat Universal Base Images (UBI) were created to encourage anyone building their own containers to start with one that is made entirely from Red Hat Enterprise Linux rpm packages and other content. These UBI images are updated regularly to keep up with security patches and free to use and redistribute with container images built to include your own software. Search the Red Hat Ecosystem Catalog to both find and check the health of different UBI images. 
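As a small, hedged sketch of building on UBI, the Containerfile below starts from the standard ubi8 image, pulls in an RPM with dnf, and switches to a non-root user; the package choice, script name, and UID are illustrative assumptions.

# Containerfile
FROM registry.access.redhat.com/ubi8/ubi

# Install content from the freely available UBI repositories (package choice is illustrative)
RUN dnf -y install procps-ng && dnf -y clean all

# Add your application and run it as a non-root user
COPY app.sh /usr/local/bin/app.sh
RUN chmod +x /usr/local/bin/app.sh
USER 1001
CMD ["/usr/local/bin/app.sh"]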
As someone creating secure container images, you might be interested in these two general types of UBI images: UBI : There are standard UBI images for RHEL 7 and 8 ( ubi7/ubi and ubi8/ubi ), as well as minimal images based on those systems ( ubi7/ubi-minimal and ubi8/ubi-minimal ). All of these images are preconfigured to point to free repositories of RHEL software that you can add to the container images you build, using standard yum and dnf commands. Red Hat encourages people to use these images on other distributions, such as Fedora and Ubuntu. Red Hat Software Collections : Search the Red Hat Ecosystem Catalog for rhscl/ to find images created to use as base images for specific types of applications. For example, there are Apache httpd ( rhscl/httpd-* ), Python ( rhscl/python-* ), Ruby ( rhscl/ruby-* ), Node.js ( rhscl/nodejs-* ), and Perl ( rhscl/perl-* ) rhscl images. Keep in mind that while UBI images are freely available and redistributable, Red Hat support for these images is only available through Red Hat product subscriptions. See Using Red Hat Universal Base Images in the Red Hat Enterprise Linux documentation for information on how to use and build on standard, minimal, and init UBI images. 2.6.3. Security scanning in RHEL For Red Hat Enterprise Linux (RHEL) systems, OpenSCAP scanning is available from the openscap-utils package. In RHEL, you can use the oscap-podman command to scan images for vulnerabilities. See Scanning containers and container images for vulnerabilities in the Red Hat Enterprise Linux documentation. OpenShift Container Platform enables you to leverage RHEL scanners with your CI/CD process. For example, you can integrate static code analysis tools that test for security flaws in your source code and software composition analysis tools that identify open source libraries to provide metadata on those libraries such as known vulnerabilities. 2.6.3.1. Scanning OpenShift images For the container images that are running in OpenShift Container Platform and are pulled from Red Hat Quay registries, you can use an Operator to list the vulnerabilities of those images. The Red Hat Quay Container Security Operator can be added to OpenShift Container Platform to provide vulnerability reporting for images added to selected namespaces. Container image scanning for Red Hat Quay is performed by the Clair security scanner . In Red Hat Quay, Clair can search for and report vulnerabilities in images built from RHEL, CentOS, Oracle, Alpine, Debian, and Ubuntu operating system software. 2.6.4. Integrating external scanning OpenShift Container Platform makes use of object annotations to extend functionality. External tools, such as vulnerability scanners, can annotate image objects with metadata to summarize results and control pod execution. This section describes the recognized format of this annotation so it can be reliably used in consoles to display useful data to users. 2.6.4.1. Image metadata There are different types of image quality data, including package vulnerabilities and open source software (OSS) license compliance. Additionally, there may be more than one provider of this metadata. To that end, the following annotation format has been reserved: Table 2.1. Annotation key format Component Description Acceptable values qualityType Metadata type vulnerability license operations policy providerId Provider ID string openscap redhatcatalog redhatinsights blackduck jfrog 2.6.4.1.1.
Example annotation keys The value of the image quality annotation is structured data that must adhere to the following format: Table 2.2. Annotation value format Field Required? Description Type name Yes Provider display name String timestamp Yes Scan timestamp String description No Short description String reference Yes URL of information source or more details. Required so user may validate the data. String scannerVersion No Scanner version String compliant No Compliance pass or fail Boolean summary No Summary of issues found List (see table below) The summary field must adhere to the following format: Table 2.3. Summary field value format Field Description Type label Display label for component (for example, "critical," "important," "moderate," "low," or "health") String data Data for this component (for example, count of vulnerabilities found or score) String severityIndex Component index allowing for ordering and assigning graphical representation. The value is range 0..3 where 0 = low. Integer reference URL of information source or more details. Optional. String 2.6.4.1.2. Example annotation values This example shows an OpenSCAP annotation for an image with vulnerability summary data and a compliance boolean: OpenSCAP annotation { "name": "OpenSCAP", "description": "OpenSCAP vulnerability score", "timestamp": "2016-09-08T05:04:46Z", "reference": "https://www.open-scap.org/930492", "compliant": true, "scannerVersion": "1.2", "summary": [ { "label": "critical", "data": "4", "severityIndex": 3, "reference": null }, { "label": "important", "data": "12", "severityIndex": 2, "reference": null }, { "label": "moderate", "data": "8", "severityIndex": 1, "reference": null }, { "label": "low", "data": "26", "severityIndex": 0, "reference": null } ] } This example shows the Container images section of the Red Hat Ecosystem Catalog annotation for an image with health index data with an external URL for additional details: Red Hat Ecosystem Catalog annotation { "name": "Red Hat Ecosystem Catalog", "description": "Container health index", "timestamp": "2016-09-08T05:04:46Z", "reference": "https://access.redhat.com/errata/RHBA-2016:1566", "compliant": null, "scannerVersion": "1.2", "summary": [ { "label": "Health index", "data": "B", "severityIndex": 1, "reference": null } ] } 2.6.4.2. Annotating image objects While image stream objects are what an end user of OpenShift Container Platform operates against, image objects are annotated with security metadata. Image objects are cluster-scoped, pointing to a single image that may be referenced by many image streams and tags. 2.6.4.2.1. Example annotate CLI command Replace <image> with an image digest, for example sha256:401e359e0f45bfdcf004e258b72e253fd07fba8cc5c6f2ed4f4608fb119ecc2 : USD oc annotate image <image> \ quality.images.openshift.io/vulnerability.redhatcatalog='{ \ "name": "Red Hat Ecosystem Catalog", \ "description": "Container health index", \ "timestamp": "2020-06-01T05:04:46Z", \ "compliant": null, \ "scannerVersion": "1.2", \ "reference": "https://access.redhat.com/errata/RHBA-2020:2347", \ "summary": "[ \ { "label": "Health index", "data": "B", "severityIndex": 1, "reference": null } ]" }' 2.6.4.3. Controlling pod execution Use the images.openshift.io/deny-execution image policy to programmatically control if an image can be run. 2.6.4.3.1. Example annotation annotations: images.openshift.io/deny-execution: true 2.6.4.4. 
Integration reference In most cases, external tools such as vulnerability scanners develop a script or plugin that watches for image updates, performs scanning, and annotates the associated image object with the results. Typically this automation calls the OpenShift Container Platform 4.10 REST APIs to write the annotation. See OpenShift Container Platform REST APIs for general information on the REST APIs. 2.6.4.4.1. Example REST API call The following example call using curl overrides the value of the annotation. Be sure to replace the values for <token> , <openshift_server> , <image_id> , and <image_annotation> . Patch API call USD curl -X PATCH \ -H "Authorization: Bearer <token>" \ -H "Content-Type: application/merge-patch+json" \ https://<openshift_server>:6443/apis/image.openshift.io/v1/images/<image_id> \ --data '{ <image_annotation> }' The following is an example of PATCH payload data: Patch call data { "metadata": { "annotations": { "quality.images.openshift.io/vulnerability.redhatcatalog": "{ 'name': 'Red Hat Ecosystem Catalog', 'description': 'Container health index', 'timestamp': '2020-06-01T05:04:46Z', 'compliant': null, 'reference': 'https://access.redhat.com/errata/RHBA-2020:2347', 'summary': [{'label': 'Health index', 'data': '4', 'severityIndex': 1, 'reference': null}] }" } } } Additional resources Image stream objects 2.7. Using container registries securely Container registries store container images to: Make images accessible to others Organize images into repositories that can include multiple versions of an image Optionally limit access to images, based on different authentication methods, or make them publicly available There are public container registries, such as Quay.io and Docker Hub where many people and organizations share their images. The Red Hat Registry offers supported Red Hat and partner images, while the Red Hat Ecosystem Catalog offers detailed descriptions and health checks for those images. To manage your own registry, you could purchase a container registry such as Red Hat Quay . From a security standpoint, some registries provide special features to check and improve the health of your containers. For example, Red Hat Quay offers container vulnerability scanning with Clair security scanner, build triggers to automatically rebuild images when source code changes in GitHub and other locations, and the ability to use role-based access control (RBAC) to secure access to images. 2.7.1. Knowing where containers come from? There are tools you can use to scan and track the contents of your downloaded and deployed container images. However, there are many public sources of container images. When using public container registries, you can add a layer of protection by using trusted sources. 2.7.2. Immutable and certified containers Consuming security updates is particularly important when managing immutable containers . Immutable containers are containers that will never be changed while running. When you deploy immutable containers, you do not step into the running container to replace one or more binaries. From an operational standpoint, you rebuild and redeploy an updated container image to replace a container instead of changing it. 
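One hedged way to check where an image comes from before you pull or redeploy it is to inspect it remotely; skopeo reads the manifest digest and labels straight from the registry. The image references below are illustrative.

# Read the manifest digest, labels, and layer list for an image without pulling it
skopeo inspect docker://registry.access.redhat.com/ubi8/ubi:latest

# The same check works against your own registry before you roll out an updated tag
skopeo inspect docker://registry.example.com/myapp:1.2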
Red Hat certified images are: Free of known vulnerabilities in the platform components or layers Compatible across the RHEL platforms, from bare metal to cloud Supported by Red Hat The list of known vulnerabilities is constantly evolving, so you must track the contents of your deployed container images, as well as newly downloaded images, over time. You can use Red Hat Security Advisories (RHSAs) to alert you to any newly discovered issues in Red Hat certified container images, and direct you to the updated image. Alternatively, you can go to the Red Hat Ecosystem Catalog to look up that and other security-related issues for each Red Hat image. 2.7.3. Getting containers from Red Hat Registry and Ecosystem Catalog Red Hat lists certified container images for Red Hat products and partner offerings from the Container Images section of the Red Hat Ecosystem Catalog. From that catalog, you can see details of each image, including CVE, software packages listings, and health scores. Red Hat images are actually stored in what is referred to as the Red Hat Registry , which is represented by a public container registry ( registry.access.redhat.com ) and an authenticated registry ( registry.redhat.io ). Both include basically the same set of container images, with registry.redhat.io including some additional images that require authentication with Red Hat subscription credentials. Container content is monitored for vulnerabilities by Red Hat and updated regularly. When Red Hat releases security updates, such as fixes to glibc , DROWN , or Dirty Cow , any affected container images are also rebuilt and pushed to the Red Hat Registry. Red Hat uses a health index to reflect the security risk for each container provided through the Red Hat Ecosystem Catalog. Because containers consume software provided by Red Hat and the errata process, old, stale containers are insecure whereas new, fresh containers are more secure. To illustrate the age of containers, the Red Hat Ecosystem Catalog uses a grading system. A freshness grade is a measure of the oldest and most severe security errata available for an image. "A" is more up to date than "F". See Container Health Index grades as used inside the Red Hat Ecosystem Catalog for more details on this grading system. See the Red Hat Product Security Center for details on security updates and vulnerabilities related to Red Hat software. Check out Red Hat Security Advisories to search for specific advisories and CVEs. 2.7.4. OpenShift Container Registry OpenShift Container Platform includes the OpenShift Container Registry , a private registry running as an integrated component of the platform that you can use to manage your container images. The OpenShift Container Registry provides role-based access controls that allow you to manage who can pull and push which container images. OpenShift Container Platform also supports integration with other private registries that you might already be using, such as Red Hat Quay. Additional resources Integrated OpenShift image registry 2.7.5. Storing containers using Red Hat Quay Red Hat Quay is an enterprise-quality container registry product from Red Hat. Development for Red Hat Quay is done through the upstream Project Quay . Red Hat Quay is available to deploy on-premise or through the hosted version of Red Hat Quay at Quay.io . Security-related features of Red Hat Quay include: Time machine : Allows images with older tags to expire after a set period of time or based on a user-selected expiration time. 
Repository mirroring : Lets you mirror other registries for security reasons, such as hosting a public repository on Red Hat Quay behind a company firewall, or for performance reasons, to keep registries closer to where they are used. Action log storage : Save Red Hat Quay logging output to Elasticsearch storage to allow for later search and analysis. Clair security scanning : Scan images against a variety of Linux vulnerability databases, based on the origins of each container image. Internal authentication : Use the default local database to handle RBAC authentication to Red Hat Quay or choose from LDAP, Keystone (OpenStack), JWT Custom Authentication, or External Application Token authentication. External authorization (OAuth) : Allow authorization to Red Hat Quay from GitHub, GitHub Enterprise, or Google Authentication. Access settings : Generate tokens to allow access to Red Hat Quay from docker, rkt, anonymous access, user-created accounts, encrypted client passwords, or prefix username autocompletion. Ongoing integration of Red Hat Quay with OpenShift Container Platform continues, with several OpenShift Container Platform Operators of particular interest. The Quay Bridge Operator lets you replace the internal OpenShift image registry with Red Hat Quay. The Red Hat Quay Container Security Operator lets you check vulnerabilities of images running in OpenShift Container Platform that were pulled from Red Hat Quay registries. 2.8. Securing the build process In a container environment, the software build process is the stage in the life cycle where application code is integrated with the required runtime libraries. Managing this build process is key to securing the software stack. 2.8.1. Building once, deploying everywhere Using OpenShift Container Platform as the standard platform for container builds enables you to guarantee the security of the build environment. Adhering to a "build once, deploy everywhere" philosophy ensures that the product of the build process is exactly what is deployed in production. It is also important to maintain the immutability of your containers. You should not patch running containers, but rebuild and redeploy them. As your software moves through the stages of building, testing, and production, it is important that the tools making up your software supply chain be trusted. The process and tools that could be incorporated into a trusted software supply chain for containerized software include the following: OpenShift Container Platform can be integrated with trusted code repositories (such as GitHub) and development platforms (such as Che) for creating and managing secure code. Unit testing could rely on Cucumber and JUnit. You could inspect your containers for vulnerabilities and compliance issues with Anchore or Twistlock, and use image scanning tools such as AtomicScan or Clair. Tools such as Sysdig could provide ongoing monitoring of your containerized applications. 2.8.2. Managing builds You can use Source-to-Image (S2I) to combine source code and base images. Builder images make use of S2I to enable your development and operations teams to collaborate on a reproducible build environment. With Red Hat S2I images available as Universal Base Image (UBI) images, you can now freely redistribute your software with base images built from real RHEL RPM packages. Red Hat has removed subscription restrictions to allow this.
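A hedged sketch of the S2I workflow just described: a single oc new-app invocation combines a builder image with a Git repository. The builder image tag and application name are illustrative assumptions; the sample repository is the same one used in the build-secret example later in this chapter.

# Build and deploy an application from source using an S2I builder image
oc new-app registry.access.redhat.com/ubi8/nodejs-16~https://github.com/sclorg/nodejs-ex.git \
    --name=my-nodejs-app

# Follow the resulting S2I build
oc logs -f bc/my-nodejs-app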
When developers commit code with Git for an application using build images, OpenShift Container Platform can perform the following functions: Trigger, either by using webhooks on the code repository or other automated continuous integration (CI) process, to automatically assemble a new image from available artifacts, the S2I builder image, and the newly committed code. Automatically deploy the newly built image for testing. Promote the tested image to production where it can be automatically deployed using a CI process. You can use the integrated OpenShift Container Registry to manage access to final images. Both S2I and native build images are automatically pushed to your OpenShift Container Registry. In addition to the included Jenkins for CI, you can also integrate your own build and CI environment with OpenShift Container Platform using RESTful APIs, as well as use any API-compliant image registry. 2.8.3. Securing inputs during builds In some scenarios, build operations require credentials to access dependent resources, but it is undesirable for those credentials to be available in the final application image produced by the build. You can define input secrets for this purpose. For example, when building a Node.js application, you can set up your private mirror for Node.js modules. To download modules from that private mirror, you must supply a custom .npmrc file for the build that contains a URL, user name, and password. For security reasons, you do not want to expose your credentials in the application image. Using this example scenario, you can add an input secret to a new BuildConfig object: Create the secret, if it does not exist: USD oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc This creates a new secret named secret-npmrc , which contains the base64 encoded content of the ~/.npmrc file. Add the secret to the source section in the existing BuildConfig object: source: git: uri: https://github.com/sclorg/nodejs-ex.git secrets: - destinationDir: . secret: name: secret-npmrc To include the secret in a new BuildConfig object, run the following command: USD oc new-build \ openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git \ --build-secret secret-npmrc 2.8.4. Designing your build process You can design your container image management and build process to use container layers so that you can separate control. For example, an operations team manages base images, while architects manage middleware, runtimes, databases, and other solutions. Developers can then focus on application layers and focus on writing code. Because new vulnerabilities are identified daily, you need to proactively check container content over time. To do this, you should integrate automated security testing into your build or CI process. For example: SAST / DAST - Static and Dynamic security testing tools. Scanners for real-time checking against known vulnerabilities. Tools like these catalog the open source packages in your container, notify you of any known vulnerabilities, and update you when new vulnerabilities are discovered in previously scanned packages. Your CI process should include policies that flag builds with issues discovered by security scans so that your team can take appropriate action to address those issues. You should sign your custom built containers to ensure that nothing is tampered with between build and deployment. 
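One hedged way to perform that signing is with skopeo, which can attach a GPG signature while it copies an image between registries; the registry paths and signing identity below are illustrative assumptions, and where the signature blob is stored depends on the registries.d configuration covered in the next section.

# Sign the image with the given GPG identity while promoting it to the production registry
skopeo copy --sign-by security@example.com \
    docker://registry.example.com/dev/myapp:1.2 \
    docker://registry.example.com/prod/myapp:1.2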
Using GitOps methodology, you can use the same CI/CD mechanisms to manage not only your application configurations, but also your OpenShift Container Platform infrastructure. 2.8.5. Building Knative serverless applications Relying on Kubernetes and Kourier, you can build, deploy, and manage serverless applications by using OpenShift Serverless in OpenShift Container Platform. As with other builds, you can use S2I images to build your containers, then serve them using Knative services. View Knative application builds through the Topology view of the OpenShift Container Platform web console. 2.8.6. Additional resources Understanding image builds Triggering and modifying builds Creating build inputs Input secrets and config maps OpenShift Serverless overview Viewing application composition using the Topology view 2.9. Deploying containers You can use a variety of techniques to make sure that the containers you deploy hold the latest production-quality content and that they have not been tampered with. These techniques include setting up build triggers to incorporate the latest code and using signatures to ensure that the container comes from a trusted source and has not been modified. 2.9.1. Controlling container deployments with triggers If something happens during the build process, or if a vulnerability is discovered after an image has been deployed, you can use tooling for automated, policy-based deployment to remediate. You can use triggers to rebuild and replace images, preserving the immutable container process, instead of patching running containers, which is not recommended. For example, you build an application using three container image layers: core, middleware, and applications. An issue is discovered in the core image and that image is rebuilt. After the build is complete, the image is pushed to your OpenShift Container Registry. OpenShift Container Platform detects that the image has changed and automatically rebuilds and deploys the application image, based on the defined triggers. This change incorporates the fixed libraries and ensures that the production code is identical to the most current image. You can use the oc set triggers command to set a deployment trigger. For example, to set a trigger for a deployment called deployment-example: USD oc set triggers deploy/deployment-example \ --from-image=example:latest \ --containers=web 2.9.2. Controlling what image sources can be deployed It is important that the intended images are actually being deployed, that the images, including their contents, are from trusted sources, and that they have not been altered. Cryptographic signing provides this assurance. OpenShift Container Platform enables cluster administrators to apply security policy that is broad or narrow, reflecting deployment environment and security requirements. Two parameters define this policy: one or more registries (optionally with a project namespace), and a trust type, such as accept, reject, or require public key(s). You can use these policy parameters to allow, deny, or require a trust relationship for entire registries, parts of registries, or individual images. Using trusted public keys, you can ensure that the source is cryptographically verified. The policy rules apply to nodes. Policy may be applied uniformly across all nodes or targeted for different node workloads (for example, build, zone, or environment). Example image signature policy file The policy can be saved onto a node as /etc/containers/policy.json.
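The policy file example referred to here is not included in this extract, so the following is a plausible reconstruction that implements the four rules listed immediately after it; the internal registry address ( 172.30.1.1:5000 ) and the example.com key path are illustrative assumptions.

{
  "default": [{"type": "reject"}],
  "transports": {
    "docker": {
      "registry.access.redhat.com": [
        {
          "type": "signedBy",
          "keyType": "GPGKeys",
          "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"
        }
      ]
    },
    "atomic": {
      "172.30.1.1:5000/openshift": [
        {
          "type": "signedBy",
          "keyType": "GPGKeys",
          "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"
        }
      ],
      "172.30.1.1:5000/production": [
        {
          "type": "signedBy",
          "keyType": "GPGKeys",
          "keyPath": "/etc/pki/example.com/pubkey"
        }
      ]
    }
  }
}

Substitute your cluster's image registry address and point each keyPath at a public key that is actually distributed to the nodes.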
Saving this file to a node is best accomplished using a new MachineConfig object. This example enforces the following rules: Require images from the Red Hat Registry ( registry.access.redhat.com ) to be signed by the Red Hat public key. Require images from your OpenShift Container Registry in the openshift namespace to be signed by the Red Hat public key. Require images from your OpenShift Container Registry in the production namespace to be signed by the public key for example.com . Reject all other registries not specified by the global default definition. 2.9.3. Using signature transports A signature transport is a way to store and retrieve the binary signature blob. There are two types of signature transports. atomic : Managed by the OpenShift Container Platform API. docker : Served as a local file or by a web server. The OpenShift Container Platform API manages signatures that use the atomic transport type. You must store the images that use this signature type in your OpenShift Container Registry. Because the docker/distribution extensions API auto-discovers the image signature endpoint, no additional configuration is required. Signatures that use the docker transport type are served by local file or web server. These signatures are more flexible; you can serve images from any container image registry and use an independent server to deliver binary signatures. However, the docker transport type requires additional configuration. You must configure the nodes with the URI of the signature server by placing arbitrarily-named YAML files into a directory on the host system, /etc/containers/registries.d by default. The YAML configuration files contain a registry URI and a signature server URI, or sigstore : Example registries.d file docker: access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore In this example, the Red Hat Registry, access.redhat.com , is the signature server that provides signatures for the docker transport type. Its URI is defined in the sigstore parameter. You might name this file /etc/containers/registries.d/redhat.com.yaml and use the Machine Config Operator to automatically place the file on each node in your cluster. No service restart is required since policy and registries.d files are dynamically loaded by the container runtime. 2.9.4. Creating secrets and config maps The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, dockercfg files, and private source repository credentials. Secrets decouple sensitive content from pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. For example, to add a secret to your deployment configuration so that it can access a private image repository, do the following: Procedure Log in to the OpenShift Container Platform web console. Create a new project. Navigate to Resources Secrets and create a new secret. Set Secret Type to Image Secret and Authentication Type to Image Registry Credentials to enter credentials for accessing a private image repository. When creating a deployment configuration (for example, from the Add to Project Deploy Image page), set the Pull Secret to your new secret. Config maps are similar to secrets, but are designed to support working with strings that do not contain sensitive information. 
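Both object types can also be created from the command line. A minimal sketch, with placeholder registry details, credentials, and values:

$ oc create secret docker-registry my-pull-secret \
    --docker-server=registry.example.com \
    --docker-username=builder \
    --docker-password=<password>
$ oc secrets link default my-pull-secret --for=pull   # let the default service account pull with this secret
$ oc create configmap app-config --from-literal=LOG_LEVEL=info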
The ConfigMap object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. 2.9.5. Automating continuous deployment You can integrate your own continuous deployment (CD) tooling with OpenShift Container Platform. By leveraging CI/CD and OpenShift Container Platform, you can automate the process of rebuilding the application to incorporate the latest fixes, testing, and ensuring that it is deployed everywhere within the environment. Additional resources Input secrets and config maps 2.10. Securing the container platform OpenShift Container Platform and Kubernetes APIs are key to automating container management at scale. APIs are used to: Validate and configure the data for pods, services, and replication controllers. Perform project validation on incoming requests and invoke triggers on other major system components. Security-related features in OpenShift Container Platform that are based on Kubernetes include: Multitenancy, which combines Role-Based Access Controls and network policies to isolate containers at multiple levels. Admission plugins, which form boundaries between an API and those making requests to the API. OpenShift Container Platform uses Operators to automate and simplify the management of Kubernetes-level security features. 2.10.1. Isolating containers with multitenancy Multitenancy allows applications on an OpenShift Container Platform cluster that are owned by multiple users, and run across multiple hosts and namespaces, to remain isolated from each other and from outside attacks. You obtain multitenancy by applying role-based access control (RBAC) to Kubernetes namespaces. In Kubernetes, namespaces are areas where applications can run in ways that are separate from other applications. OpenShift Container Platform uses and extends namespaces by adding extra annotations, including MCS labeling in SELinux, and identifying these extended namespaces as projects . Within the scope of a project, users can maintain their own cluster resources, including service accounts, policies, constraints, and various other objects. RBAC objects are assigned to projects to authorize selected users to have access to those projects. That authorization takes the form of rules, roles, and bindings: Rules define what a user can create or access in a project. Roles are collections of rules that you can bind to selected users or groups. Bindings define the association between users or groups and roles. Local RBAC roles and bindings attach a user or group to a particular project. Cluster RBAC can attach cluster-wide roles and bindings to all projects in a cluster. There are default cluster roles that can be assigned to provide admin , basic-user , cluster-admin , and cluster-status access. 2.10.2. Protecting control plane with admission plugins While RBAC controls access rules between users and groups and available projects, admission plugins define access to the OpenShift Container Platform master API. Admission plugins form a chain of rules that consist of: Default admissions plugins: These implement a default set of policies and resources limits that are applied to components of the OpenShift Container Platform control plane. Mutating admission plugins: These plugins dynamically extend the admission chain. They call out to a webhook server and can both authenticate a request and modify the selected resource. 
Validating admission plugins: These validate requests for a selected resource and can both validate the request and ensure that the resource does not change again. API requests go through admissions plugins in a chain, with any failure along the way causing the request to be rejected. Each admission plugin is associated with particular resources and only responds to requests for those resources. 2.10.2.1. Security context constraints (SCCs) You can use security context constraints (SCCs) to define a set of conditions that a pod must run with to be accepted into the system. Some aspects that can be managed by SCCs include: Running of privileged containers Capabilities a container can request to be added Use of host directories as volumes SELinux context of the container Container user ID If you have the required permissions, you can adjust the default SCC policies to be more permissive, if required. 2.10.2.2. Granting roles to service accounts You can assign roles to service accounts, in the same way that users are assigned role-based access. There are three default service accounts created for each project. A service account: is limited in scope to a particular project derives its name from its project is automatically assigned an API token and credentials to access the OpenShift Container Registry Service accounts associated with platform components automatically have their keys rotated. 2.10.3. Authentication and authorization 2.10.3.1. Controlling access using OAuth You can use API access control via authentication and authorization for securing your container platform. The OpenShift Container Platform master includes a built-in OAuth server. Users can obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to authenticate using an identity provider , such as LDAP, GitHub, or Google. The identity provider is used by default for new OpenShift Container Platform deployments, but you can configure this at initial installation time or post-installation. 2.10.3.2. API access control and management Applications can have multiple, independent API services which have different endpoints that require management. OpenShift Container Platform includes a containerized version of the 3scale API gateway so that you can manage your APIs and control access. 3scale gives you a variety of standard options for API authentication and security, which can be used alone or in combination to issue credentials and control access: standard API keys, application ID and key pair, and OAuth 2.0. You can restrict access to specific endpoints, methods, and services and apply access policy for groups of users. Application plans allow you to set rate limits for API usage and control traffic flow for groups of developers. For a tutorial on using APIcast v2, the containerized 3scale API Gateway, see Running APIcast on Red Hat OpenShift in the 3scale documentation. 2.10.3.3. Red Hat Single Sign-On The Red Hat Single Sign-On server enables you to secure your applications by providing web single sign-on capabilities based on standards, including SAML 2.0, OpenID Connect, and OAuth 2.0. The server can act as a SAML or OpenID Connect-based identity provider (IdP), mediating with your enterprise user directory or third-party identity provider for identity information and your applications using standards-based tokens. 
You can integrate Red Hat Single Sign-On with LDAP-based directory services including Microsoft Active Directory and Red Hat Enterprise Linux Identity Management. 2.10.3.4. Secure self-service web console OpenShift Container Platform provides a self-service web console to ensure that teams do not access other environments without authorization. OpenShift Container Platform ensures a secure multitenant master by providing the following: Access to the master uses Transport Layer Security (TLS) Access to the API Server uses X.509 certificates or OAuth access tokens Project quota limits the damage that a rogue token could do The etcd service is not exposed directly to the cluster 2.10.4. Managing certificates for the platform OpenShift Container Platform has multiple components within its framework that use REST-based HTTPS communication leveraging encryption via TLS certificates. OpenShift Container Platform's installer configures these certificates during installation. There are some primary components that generate this traffic: masters (API server and controllers) etcd nodes registry router 2.10.4.1. Configuring custom certificates You can configure custom serving certificates for the public hostnames of the API server and web console during initial installation or when redeploying certificates. You can also use a custom CA. Additional resources Introduction to OpenShift Container Platform Using RBAC to define and apply permissions About admission plugins Managing security context constraints SCC reference commands Examples of granting roles to service accounts Configuring the internal OAuth server Understanding identity provider configuration Certificate types and descriptions Proxy certificates 2.11. Securing networks Network security can be managed at several levels. At the pod level, network namespaces can prevent containers from seeing other pods or the host system by restricting network access. Network policies give you control over allowing and rejecting connections. You can manage ingress and egress traffic to and from your containerized applications. 2.11.1. Using network namespaces OpenShift Container Platform uses software-defined networking (SDN) to provide a unified cluster network that enables communication between containers across the cluster. Network policy mode, by default, makes all pods in a project accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Using multitenant mode, you can provide project-level isolation for pods and services. 2.11.2. Isolating pods with network policies Using network policies , you can isolate pods from each other in the same project. Network policies can deny all network access to a pod, only allow connections for the Ingress Controller, reject connections from pods in other projects, or set similar rules for how networks behave. Additional resources About network policy 2.11.3. Using multiple pod networks Each running container has only one network interface by default. The Multus CNI plugin lets you create multiple CNI networks, and then attach any of those networks to your pods. In that way, you can do things like separate private data onto a more restricted network and have multiple network interfaces on each node. Additional resources Using multiple networks 2.11.4. 
Isolating applications OpenShift Container Platform enables you to segment network traffic on a single cluster to make multitenant clusters that isolate users, teams, applications, and environments from non-global resources. Additional resources Configuring network isolation using OpenShiftSDN 2.11.5. Securing ingress traffic There are many security implications related to how you configure access to your Kubernetes services from outside of your OpenShift Container Platform cluster. Besides exposing HTTP and HTTPS routes, ingress routing allows you to set up NodePort or LoadBalancer ingress types. NodePort exposes an application's service API object from each cluster worker. LoadBalancer lets you assign an external load balancer to an associated service API object in your OpenShift Container Platform cluster. Additional resources Configuring ingress cluster traffic 2.11.6. Securing egress traffic OpenShift Container Platform provides the ability to control egress traffic using either a router or firewall method. For example, you can use IP whitelisting to control database access. A cluster administrator can assign one or more egress IP addresses to a project in an OpenShift Container Platform SDN network provider. Likewise, a cluster administrator can prevent egress traffic from going outside of an OpenShift Container Platform cluster using an egress firewall. By assigning a fixed egress IP address, you can have all outgoing traffic assigned to that IP address for a particular project. With the egress firewall, you can prevent a pod from connecting to an external network, prevent a pod from connecting to an internal network, or limit a pod's access to specific internal subnets. Additional resources Configuring an egress firewall to control access to external IP addresses Configuring egress IPs for a project 2.12. Securing attached storage OpenShift Container Platform supports multiple types of storage, both for on-premise and cloud providers. In particular, OpenShift Container Platform can use storage types that support the Container Storage Interface. 2.12.1. Persistent volume plugins Containers are useful for both stateless and stateful applications. Protecting attached storage is a key element of securing stateful services. Using the Container Storage Interface (CSI), OpenShift Container Platform can incorporate storage from any storage back end that supports the CSI interface. OpenShift Container Platform provides plugins for multiple types of storage, including: Red Hat OpenShift Data Foundation * AWS Elastic Block Stores (EBS) * AWS Elastic File System (EFS) * Azure Disk * Azure File * OpenStack Cinder * GCE Persistent Disks * VMware vSphere * Network File System (NFS) FlexVolume Fibre Channel iSCSI Plugins for those storage types with dynamic provisioning are marked with an asterisk (*). Data in transit is encrypted via HTTPS for all OpenShift Container Platform components communicating with each other. You can mount a persistent volume (PV) on a host in any way supported by your storage type. Different types of storage have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV has its own set of access modes describing that specific PV's capabilities, such as ReadWriteOnce , ReadOnlyMany , and ReadWriteMany . 2.12.2. 
Shared storage For shared storage providers like NFS, the PV registers its group ID (GID) as an annotation on the PV resource. Then, when the PV is claimed by the pod, the annotated GID is added to the supplemental groups of the pod, giving that pod access to the contents of the shared storage. 2.12.3. Block storage For block storage providers like AWS Elastic Block Store (EBS), GCE Persistent Disks, and iSCSI, OpenShift Container Platform uses SELinux capabilities to secure the root of the mounted volume for non-privileged pods, making the mounted volume owned by and only visible to the container with which it is associated. Additional resources Understanding persistent storage Configuring CSI volumes Dynamic provisioning Persistent storage using NFS Persistent storage using AWS Elastic Block Store Persistent storage using GCE Persistent Disk 2.13. Monitoring cluster events and logs The ability to monitor and audit an OpenShift Container Platform cluster is an important part of safeguarding the cluster and its users against inappropriate usage. There are two main sources of cluster-level information that are useful for this purpose: events and logging. 2.13.1. Watching cluster events Cluster administrators are encouraged to familiarize themselves with the Event resource type and review the list of system events to determine which events are of interest. Events are associated with a namespace, either the namespace of the resource they are related to or, for cluster events, the default namespace. The default namespace holds relevant events for monitoring or auditing a cluster, such as node events and resource events related to infrastructure components. The master API and oc command do not provide parameters to scope a listing of events to only those related to nodes. A simple approach would be to use grep : $ oc get event -n default | grep Node Example output 1h 20h 3 origin-node-1.example.local Node Normal NodeHasDiskPressure ... A more flexible approach is to output the events in a form that other tools can process. For example, the following command uses the jq tool against JSON output to extract only NodeHasDiskPressure events: $ oc get events -n default -o json | jq '.items[] | select(.involvedObject.kind == "Node" and .reason == "NodeHasDiskPressure")' Example output { "apiVersion": "v1", "count": 3, "involvedObject": { "kind": "Node", "name": "origin-node-1.example.local", "uid": "origin-node-1.example.local" }, "kind": "Event", "reason": "NodeHasDiskPressure", ... } Events related to resource creation, modification, or deletion can also be good candidates for detecting misuse of the cluster. The following query, for example, can be used to look for excessive pulling of images: $ oc get events --all-namespaces -o json | jq '[.items[] | select(.involvedObject.kind == "Pod" and .reason == "Pulling")] | length' Example output 4 Note When a namespace is deleted, its events are deleted as well. Events can also expire and are deleted to prevent filling up etcd storage. Events are not stored as a permanent record and frequent polling is necessary to capture statistics over time. 2.13.2. Logging Using the oc logs command, you can view container logs, build configs, and deployments in real time. Different users have different levels of access to logs: Users who have access to a project are able to see the logs for that project by default. Users with admin roles can access all container logs.
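A few illustrative oc logs invocations; the project and resource names are placeholders:

$ oc logs -f deployment/frontend -n my-project           # follow logs from the pods of a deployment
$ oc logs bc/frontend -n my-project                       # logs from the latest build of a build config
$ oc logs -p pod/frontend-1-abcde -n my-project           # logs from the previous, terminated container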
To save your logs for further audit and analysis, you can enable the cluster-logging add-on feature to collect, manage, and view system, container, and audit logs. You can deploy, manage, and upgrade OpenShift Logging through the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator. 2.13.3. Audit logs With audit logs , you can follow a sequence of activities associated with how a user, administrator, or other OpenShift Container Platform component is behaving. API audit logging is done on each server. Additional resources List of system events Understanding OpenShift Logging Viewing audit logs
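As an illustration, node-level API audit logs can be listed and inspected with the oc adm node-logs command; the exact log directories and file names vary with the cluster version and configuration:

$ oc adm node-logs --role=master --path=openshift-apiserver/
$ oc adm node-logs <node_name> --path=openshift-apiserver/audit.log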
[ "variant: openshift version: 4.10.0 metadata: name: 51-worker-rh-registry-trust labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/containers/policy.json mode: 0644 overwrite: true contents: inline: | { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }", "butane 51-worker-rh-registry-trust.bu -o 51-worker-rh-registry-trust.yaml", "oc apply -f 51-worker-rh-registry-trust.yaml", "oc get mc", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 00-worker a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 51-master-rh-registry-trust 3.2.0 13s 51-worker-rh-registry-trust 3.2.0 53s 1 99-master-generated-crio-seccomp-use-default 3.2.0 25m 99-master-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-master-ssh 3.2.0 28m 99-worker-generated-crio-seccomp-use-default 3.2.0 25m 99-worker-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-worker-ssh 3.2.0 28m rendered-master-af1e7ff78da0a9c851bab4be2777773b a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 8s rendered-master-cd51fd0c47e91812bfef2765c52ec7e6 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-2b52f75684fbc711bd1652dd86fd0b82 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-be3b3bce4f4aa52a62902304bac9da3c a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 48s 2", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-af1e7ff78da0a9c851bab4be2777773b True False False 3 3 3 0 30m worker rendered-worker-be3b3bce4f4aa52a62902304bac9da3c False True False 3 0 0 0 30m 1", "oc debug node/<node_name>", "sh-4.2# chroot /host", "docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore", "docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc describe machineconfigpool/worker", "Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Metadata: Creation Timestamp: 2019-12-19T02:02:12Z Generation: 3 Resource Version: 16229 Self Link: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker UID: 92697796-2203-11ea-b48c-fa163e3940e5 Spec: Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 
51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Machine Config Selector: Match Labels: machineconfiguration.openshift.io/role: worker Node Selector: Match Labels: node-role.kubernetes.io/worker: Paused: false Status: Conditions: Last Transition Time: 2019-12-19T02:03:27Z Message: Reason: Status: False Type: RenderDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: NodeDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: Degraded Last Transition Time: 2019-12-19T02:28:23Z Message: Reason: Status: False Type: Updated Last Transition Time: 2019-12-19T02:28:23Z Message: All nodes are updating to rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updating Configuration: Name: rendered-worker-d9b3f4ffcfd65c30dcf591a0e8cf9b2e Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 1 Observed Generation: 3 Ready Machine Count: 0 Unavailable Machine Count: 1 Updated Machine Count: 0 Events: <none>", "oc describe machineconfigpool/worker", "Last Transition Time: 2019-12-19T04:53:09Z Message: All nodes are updated with rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updated Last Transition Time: 2019-12-19T04:53:09Z Message: Reason: Status: False Type: Updating Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 4 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3", "oc debug node/<node> -- chroot /host cat /etc/containers/policy.json", "Starting pod/<node>-debug To use host binaries, run `chroot /host` { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }", "oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.redhat.io.yaml", "Starting 
pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore", "oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.access.redhat.com.yaml", "Starting pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "quality.images.openshift.io/<qualityType>.<providerId>: {}", "quality.images.openshift.io/vulnerability.blackduck: {} quality.images.openshift.io/vulnerability.jfrog: {} quality.images.openshift.io/license.blackduck: {} quality.images.openshift.io/vulnerability.openscap: {}", "{ \"name\": \"OpenSCAP\", \"description\": \"OpenSCAP vulnerability score\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://www.open-scap.org/930492\", \"compliant\": true, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"critical\", \"data\": \"4\", \"severityIndex\": 3, \"reference\": null }, { \"label\": \"important\", \"data\": \"12\", \"severityIndex\": 2, \"reference\": null }, { \"label\": \"moderate\", \"data\": \"8\", \"severityIndex\": 1, \"reference\": null }, { \"label\": \"low\", \"data\": \"26\", \"severityIndex\": 0, \"reference\": null } ] }", "{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://access.redhat.com/errata/RHBA-2016:1566\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ] }", "oc annotate image <image> quality.images.openshift.io/vulnerability.redhatcatalog='{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2020-06-01T05:04:46Z\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"reference\": \"https://access.redhat.com/errata/RHBA-2020:2347\", \"summary\": \"[ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ]\" }'", "annotations: images.openshift.io/deny-execution: true", "curl -X PATCH -H \"Authorization: Bearer <token>\" -H \"Content-Type: application/merge-patch+json\" https://<openshift_server>:6443/apis/image.openshift.io/v1/images/<image_id> --data '{ <image_annotation> }'", "{ \"metadata\": { \"annotations\": { \"quality.images.openshift.io/vulnerability.redhatcatalog\": \"{ 'name': 'Red Hat Ecosystem Catalog', 'description': 'Container health index', 'timestamp': '2020-06-01T05:04:46Z', 'compliant': null, 'reference': 'https://access.redhat.com/errata/RHBA-2020:2347', 'summary': [{'label': 'Health index', 'data': '4', 'severityIndex': 1, 'reference': null}] }\" } } }", "oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc", "source: git: uri: https://github.com/sclorg/nodejs-ex.git secrets: - destinationDir: . 
secret: name: secret-npmrc", "oc new-build openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git --build-secret secret-npmrc", "oc set triggers deploy/deployment-example --from-image=example:latest --containers=web", "{ \"default\": [{\"type\": \"reject\"}], \"transports\": { \"docker\": { \"access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"atomic\": { \"172.30.1.1:5000/openshift\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"172.30.1.1:5000/production\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/example.com/pubkey\" } ], \"172.30.1.1:5000\": [{\"type\": \"reject\"}] } } }", "docker: access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc get event -n default | grep Node", "1h 20h 3 origin-node-1.example.local Node Normal NodeHasDiskPressure", "oc get events -n default -o json | jq '.items[] | select(.involvedObject.kind == \"Node\" and .reason == \"NodeHasDiskPressure\")'", "{ \"apiVersion\": \"v1\", \"count\": 3, \"involvedObject\": { \"kind\": \"Node\", \"name\": \"origin-node-1.example.local\", \"uid\": \"origin-node-1.example.local\" }, \"kind\": \"Event\", \"reason\": \"NodeHasDiskPressure\", }", "oc get events --all-namespaces -o json | jq '[.items[] | select(.involvedObject.kind == \"Pod\" and .reason == \"Pulling\")] | length'", "4" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/security_and_compliance/container-security-1
Preface
Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) using Red Hat OpenStack Platform clusters. Note Both internal and external OpenShift Data Foundation clusters are supported on Red Hat OpenStack Platform. See Planning your deployment for more information about deployment requirements. To deploy OpenShift Data Foundation, start with the requirements in Preparing to deploy OpenShift Data Foundation chapter and then follow the appropriate deployment process based on your requirement: Internal mode Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in internal mode Deploy standalone Multicloud Object Gateway component External mode Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in external mode
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/preface-ocs-osp
Chapter 4. Configuring Red Hat High Availability clusters on AWS
Chapter 4. Configuring Red Hat High Availability clusters on AWS This chapter includes information and procedures for configuring a Red Hat High Availability (HA) cluster on Amazon Web Services (AWS) using EC2 instances as cluster nodes. You have a number of options for obtaining the Red Hat Enterprise Linux (RHEL) images you use for your cluster. For information on image options for AWS, see Red Hat Enterprise Linux Image Options on AWS . This chapter includes prerequisite procedures for setting up your environment for AWS. Once you have set up your environment, you can create and configure EC2 instances. This chapter also includes procedures specific to the creation of HA clusters, which transform individual nodes into a cluster of HA nodes on AWS. These include procedures for installing the High Availability packages and agents on each cluster node, configuring fencing, and installing AWS network resource agents. This chapter refers to the Amazon documentation in a number of places. For many procedures, see the referenced Amazon documentation for more information. Prerequisites You need to install the AWS command line interface (CLI). For more information on installing AWS CLI, see Installing the AWS CLI . Enable your subscriptions in the Red Hat Cloud Access program . The Red Hat Cloud Access program allows you to move your Red Hat subscriptions from physical or on-premise systems onto AWS with full support from Red Hat. Additional resources Red Hat Cloud Access Reference Guide Red Hat in the Public Cloud Red Hat Enterprise Linux on Amazon EC2 - FAQs Setting up with Amazon EC2 Red Hat on Amazon Web Services Support Policies for RHEL High Availability Clusters 4.1. Creating the AWS Access Key and AWS Secret Access Key You need to create an AWS Access Key and AWS Secret Access Key before you install the AWS CLI. The fencing and resource agent APIs use the AWS Access Key and Secret Access Key to connect to each node in the cluster. Complete the following steps to create these keys. Prerequisites Your IAM user account must have Programmatic access. See Setting up the AWS Environment for more information. Procedure Launch the AWS Console . Click on your AWS Account ID to display the drop-down menu and select My Security Credentials . Click Users . Select the user to open the Summary screen. Click the Security credentials tab. Click Create access key . Download the .csv file (or save both keys). You need to enter these keys when creating the fencing device. 4.2. Installing the HA packages and agents Complete the following steps on all nodes to install the HA packages and agents. Procedure Enter the following command to remove the AWS Red Hat Update Infrastructure (RHUI) client. Because you are going to use a Red Hat Cloud Access subscription, you should not use AWS RHUI in addition to your subscription. Register the VM with Red Hat. Disable all repositories. Enable the RHEL 7 Server and RHEL 7 Server HA repositories. Update all packages. Reboot if the kernel is updated. Install pcs, pacemaker, fence agent, and resource agent. The user hacluster was created during the pcs and pacemaker installation in the step. Create a password for hacluster on all cluster nodes. Use the same password for all nodes. Add the high availability service to the RHEL Firewall if firewalld.service is enabled. Start the pcs service and enable it to start on boot. Verification step Ensure the pcs service is running. 4.3. Creating a cluster Complete the following steps to create the cluster of nodes. 
Procedure On one of the nodes, enter the following command to authenticate the pcs user hacluster . Specify the name of each node in the cluster. Example: Create the cluster. Example: Verification steps Enable the cluster. Start the cluster. Example: 4.4. Creating a fencing device Complete the following steps to configure fencing. Procedure Enter the following AWS metadata query to get the Instance ID for each node. You need these IDs to configure the fence device. See Instance Metadata and User Data for additional information. Example: Create a fence device. Use the pcmk_host_map command to map the RHEL host name to the Instance ID. Use the AWS Access Key and AWS Secret Access Key you previously set up in Creating the AWS Access Key and AWS Secret Access Key . Example: Verification steps Test the fencing agent for one of the other nodes. Example: Check the status to verify that the node is fenced. Example: 4.5. Installing the AWS CLI on cluster nodes Previously, you installed the AWS CLI on your host system. You now need to install the AWS CLI on cluster nodes before you configure the network resource agents. Complete the following procedure on each cluster node. Prerequisites You must have created an AWS Access Key and AWS Secret Access Key. For more information, see Creating the AWS Access Key and AWS Secret Access Key . Procedure Perform the procedure Installing the AWS CLI . Enter the following command to verify that the AWS CLI is configured properly. The instance IDs and instance names should display. Example: 4.6. Installing network resource agents For HA operations to work, the cluster uses AWS networking resource agents to enable failover functionality. If a node does not respond to a heartbeat check in a set time, the node is fenced and operations fail over to an additional node in the cluster. Network resource agents need to be configured for this to work. Add the two resources to the same group to enforce order and colocation constraints. Create a secondary private IP resource and virtual IP resource Complete the following procedure to add a secondary private IP address and create a virtual IP. You can complete this procedure from any node in the cluster. Procedure Enter the following command to view the AWS Secondary Private IP Address resource agent (awsvip) description. This shows the options and default operations for this agent. Enter the following command to create the Secondary Private IP address using an unused private IP address in the VPC CIDR block. Example: Create a virtual IP resource. This is a VPC IP address that can be rapidly remapped from the fenced node to the failover node, masking the failure of the fenced node within the subnet. Example: Verification step Enter the pcs status command to verify that the resources are running. Example: Create an elastic IP address An elastic IP address is a public IP address that can be rapidly remapped from the fenced node to the failover node, masking the failure of the fenced node. Note that this is different from the virtual IP resource created earlier. The elastic IP address is used for public-facing Internet connections instead of subnet connections. Add the two resources to the same group that was previously created to enforce order and colocation constraints. Enter the following AWS CLI command to create an elastic IP address. Enter the following command to view the AWS Secondary Elastic IP Address resource agent (awseip) description. This shows the options and default operations for this agent. 
Create the Secondary Elastic IP address resource using the allocated IP address created in Step 1. Example: Verification step Enter the pcs status command to verify that the resource is running. Example: Test the elastic IP address Enter the following commands to verify the virtual IP (awsvip) and elastic IP (awseip) resources are working. Procedure Launch an SSH session from your local workstation to the elastic IP address previously created. Example: Verify that the host you connected to via SSH is the host associated with the elastic resource created. Additional resources High Availability Add-On Overview High Availability Add-On Administration High Availability Add-On Reference 4.7. Configuring shared block storage This section provides an optional procedure for configuring shared block storage for a Red Hat High Availability cluster with Amazon EBS Multi-Attach volumes. The procedure assumes three instances (a three-node cluster) with a 1TB shared disk. Procedure Create a shared block volume using the AWS command create-volume . For example, the following command creates a volume in the us-east-1a availability zone. Note You need the VolumeId in the step. For each instance in your cluster, attach a shared block volume using the AWS command attach-volume . Use your <instance_id> and <volume_id> . For example, the following command attaches a shared block volume vol-042a5652867304f09 to instance i-0eb803361c2c887f2 . Verification steps For each instance in your cluster, verify that the block device is available by using the SSH command with your instance <ip_address> . For example, the following command lists details including the host name and block device for the instance IP 198.51.100.3 . Use the ssh command to verify that each instance in your cluster uses the same shared disk. For example, the following command lists details including the host name and shared disk volume ID for the instance IP address 198.51.100.3 . After you have verified that the shared disk is attached to each instance, you can configure resilient storage for the cluster. For information on configuring resilient storage for a Red Hat High Availability cluster, see Configuring a GFS2 File System in a Cluster . For general information on GFS2 file systems, see Configuring and managing GFS2 file systems .
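For orientation, the volume creation, attachment, and verification steps described above correspond to the following condensed sketch, which repeats the example commands shown with this chapter; the availability zone, instance ID, volume ID, and IP address are example values:

$ aws ec2 create-volume --availability-zone us-east-1a --no-encrypted --size 1024 --volume-type io1 --iops 51200 --multi-attach-enabled
$ aws ec2 attach-volume --device /dev/xvdd --instance-id i-0eb803361c2c887f2 --volume-id vol-042a5652867304f09
$ ssh 198.51.100.3 "hostname ; lsblk -d | grep ' 1T '"   # confirm the 1TB shared disk is visible on each node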
[ "sudo -i yum -y remove rh-amazon-rhui-client*", "subscription-manager register --auto-attach", "subscription-manager repos --disable=*", "subscription-manager repos --enable=rhel-7-server-rpms subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms", "yum update -y", "reboot", "yum -y install pcs pacemaker fence-agents-aws resource-agents", "passwd hacluster", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --reload", "systemctl enable pcsd.service --now", "systemctl is-active pcsd.service", "pcs host auth _hostname1_ _hostname2_ _hostname3_", "pcs host auth node01 node02 node03 Username: hacluster Password: node01: Authorized node02: Authorized node03: Authorized", "pcs cluster setup --name _hostname1_ _hostname2_ _hostname3_", "pcs cluster setup --name newcluster node01 node02 node03 ...omitted Synchronizing pcsd certificates on nodes node01, node02, node03 node02: Success node03: Success node01: Success Restarting pcsd on the nodes in order to reload the certificates node02: Success node03: Success node01: Success", "pcs cluster enable --all", "pcs cluster start --all", "pcs cluster enable --all node02: Cluster Enabled node03: Cluster Enabled node01: Cluster Enabled pcs cluster start --all node02: Starting Cluster node03: Starting Cluster node01: Starting Cluster", "echo USD(curl -s http://169.254.169.254/latest/meta-data/instance-id)", "echo USD(curl -s http://169.254.169.254/latest/meta-data/instance-id) i-07f1ac63af0ec0ac6", "pcs stonith create cluster_fence fence_aws access_key=access-key secret_key=_secret-access-key_ region=_region_ pcmk_host_map=\"rhel-hostname-1:Instance-ID-1;rhel-hostname-2:Instance-ID-2;rhel-hostname-3:Instance-ID-3\"", "pcs stonith create clusterfence fence_aws access_key=AKIAI*******6MRMJA secret_key=a75EYIG4RVL3h*******K7koQ8dzaDyn5yoIZ/ region=us-east-1 pcmk_host_map=\"ip-10-0-0-48:i-07f1ac63af0ec0ac6;ip-10-0-0-46:i-063fc5fe93b4167b2;ip-10-0-0-58:i-08bd39eb03a6fd2c7\" power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4", "pcs stonith fence _awsnodename_", "pcs stonith fence ip-10-0-0-58 Node: ip-10-0-0-58 fenced", "watch pcs status", "pcs status Cluster name: newcluster Stack: corosync Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Fri Mar 2 20:01:31 2018 Last change: Fri Mar 2 19:24:59 2018 by root via cibadmin on ip-10-0-0-48 3 nodes configured 1 resource configured Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ] Full list of resources: clusterfence (stonith:fence_aws): Started ip-10-0-0-46 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled", "aws ec2 describe-instances --output text --query 'Reservations[*].Instances[*].[InstanceId,Tags[?Key==`Name`].Value]' i-07f1ac63af0ec0ac6 ip-10-0-0-48 i-063fc5fe93b4167b2 ip-10-0-0-46 i-08bd39eb03a6fd2c7 ip-10-0-0-58", "pcs resource describe awsvip", "pcs resource create privip awsvip secondary_private_ip=_Unused-IP-Address_ --group _group-name_", "pcs resource create privip awsvip secondary_private_ip=10.0.0.68 --group networking-group", "pcs resource create vip IPaddr2 ip=_secondary-private-IP_ --group _group-name_", "root@ip-10-0-0-48 ~]# pcs resource create vip IPaddr2 ip=10.0.0.68 --group networking-group", "pcs status", "pcs status Cluster name: newcluster Stack: corosync Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Fri Mar 2 22:34:24 2018 Last change: Fri Mar 2 22:14:58 2018 by root via cibadmin on ip-10-0-0-46 3 nodes 
configured 3 resources configured Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ] Full list of resources: clusterfence (stonith:fence_aws): Started ip-10-0-0-46 Resource Group: networking-group privip (ocf::heartbeat:awsvip): Started ip-10-0-0-48 vip (ocf::heartbeat:IPaddr2): Started ip-10-0-0-58 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled", "aws ec2 allocate-address --domain vpc --output text eipalloc-4c4a2c45 vpc 35.169.153.122", "pcs resource describe awseip", "pcs resource create elastic awseip elastic_ip=_Elastic-IP-Address_allocation_id=_Elastic-IP-Association-ID_ --group networking-group", "pcs resource create elastic awseip elastic_ip=35.169.153.122 allocation_id=eipalloc-4c4a2c45 --group networking-group", "pcs status", "pcs status Cluster name: newcluster Stack: corosync Current DC: ip-10-0-0-58 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Mon Mar 5 16:27:55 2018 Last change: Mon Mar 5 15:57:51 2018 by root via cibadmin on ip-10-0-0-46 3 nodes configured 4 resources configured Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ] Full list of resources: clusterfence (stonith:fence_aws): Started ip-10-0-0-46 Resource Group: networking-group privip (ocf::heartbeat:awsvip): Started ip-10-0-0-48 vip (ocf::heartbeat:IPaddr2): Started ip-10-0-0-48 elastic (ocf::heartbeat:awseip): Started ip-10-0-0-48 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled", "ssh -l ec2-user -i ~/.ssh/<KeyName>.pem elastic-IP", "ssh -l ec2-user -i ~/.ssh/cluster-admin.pem 35.169.153.122", "aws ec2 create-volume --availability-zone availability_zone --no-encrypted --size 1024 --volume-type io1 --iops 51200 --multi-attach-enabled", "aws ec2 create-volume --availability-zone us-east-1a --no-encrypted --size 1024 --volume-type io1 --iops 51200 --multi-attach-enabled { \"AvailabilityZone\": \"us-east-1a\", \"CreateTime\": \"2020-08-27T19:16:42.000Z\", \"Encrypted\": false, \"Size\": 1024, \"SnapshotId\": \"\", \"State\": \"creating\", \"VolumeId\": \"vol-042a5652867304f09\", \"Iops\": 51200, \"Tags\": [ ], \"VolumeType\": \"io1\" }", "aws ec2 attach-volume --device /dev/xvdd --instance-id instance_id --volume-id volume_id", "aws ec2 attach-volume --device /dev/xvdd --instance-id i-0eb803361c2c887f2 --volume-id vol-042a5652867304f09 { \"AttachTime\": \"2020-08-27T19:26:16.086Z\", \"Device\": \"/dev/xvdd\", \"InstanceId\": \"i-0eb803361c2c887f2\", \"State\": \"attaching\", \"VolumeId\": \"vol-042a5652867304f09\" }", "ssh <ip_address> \"hostname ; lsblk -d | grep ' 1T '\"", "ssh 198.51.100.3 \"hostname ; lsblk -d | grep ' 1T '\" nodea nvme2n1 259:1 0 1T 0 disk", "ssh ip_address \"hostname ; lsblk -d | grep ' 1T ' | awk '{print \\USD1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='\"", "ssh 198.51.100.3 \"hostname ; lsblk -d | grep ' 1T ' | awk '{print \\USD1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='\" nodea E: ID_SERIAL=Amazon Elastic Block Store_vol0fa5342e7aedf09f7" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/configuring-a-red-hat-high-availability-cluster-on-aws_cloud-content
Chapter 2. Accessing Insights image builder on Red Hat Hybrid Cloud Console
Chapter 2. Accessing Insights image builder on Red Hat Hybrid Cloud Console 2.1. Getting access to Insights image builder on Red Hat Hybrid Cloud Console Follow the steps to access Insights image builder on Red Hat Hybrid Cloud Console. Prerequisites An account at Red Hat Customer Portal. A Red Hat Insights subscription for your account. Red Hat Insights is included with your Red Hat Enterprise Linux subscription. Procedure Access Insights image builder. Log in with your Red Hat credentials. You are now able to create and monitor your composes. Additional resources Create a Red Hat account Product Documentation for Red Hat Insights Registration Assistant
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/creating_customized_images_by_using_insights_image_builder/getting-access-to-image-builder-on-cloud_creating-customized-rhel-images-using-the-image-builder-service
8.95. libreoffice
8.95. libreoffice 8.95.1. RHBA-2013:1594 - libreoffice bug fix and enhancement update Updated libreoffice packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. LibreOffice is an Open Source, community-developed, office productivity suite. It includes the key desktop applications, such as a word processor, spreadsheet, presentation manager, formula editor and drawing program. LibreOffice replaces OpenOffice.org and provides a similar but enhanced and extended Office Suite. Note The libreoffice package has been upgraded to upstream version 4.0.4, which provides a number of bug fixes and enhancements over the previous version. (BZ#919230) Bug Fixes BZ# 820554 The "--enable-new-dtags" flag was added to allow certain types of build-time regression tests to function. As a consequence, the GCJ Java compiler failed to search the correct location of Java libraries. This update applies a patch to remove the flag and GCJ works as expected. BZ#829709 Previously, the LibreOffice suite was not fully translated into certain local languages. This update provides the full translation of LibreOffice to local languages. BZ# 833512 When upgrading the OpenOffice.org suite to the LibreOffice suite, backward compatibility links were removed and the OpenOffice.org icons were not migrated to LibreOffice. Consequently, an attempt to launch LibreOffice failed with an error. With this update, the compatibility links have been restored and the icons now work as expected. BZ# 847519 Due to a bug in the chart creation code, an attempt to create a chart, under certain circumstances, failed with a segmentation fault. The underlying source code has been modified to fix this bug and the chart creation now works as expected. BZ#855972 Due to a bug in the underlying source code, an attempt to show the outline view in the Impress utility terminated unexpectedly. This update applies a patch to fix this bug and the outline view no longer crashes in the described scenario. BZ#863052 Certain versions of the Microsoft Office suite contain mismatching internal time stamp fields. Previously, the LibreOffice suite detected those fields and returned exceptions. Consequently, the user was not able to open certain Microsoft Office documents. With this update, LibreOffice has been modified to ignore the mismatching time stamp fields and Microsoft Office documents can be opened as expected. BZ#865058 When a large number of user-defined number formats were specified in a file, those formats used all available slots in a table and the general format was used for the remaining formats. As a consequence, certain cell formatting was not preserved when loading the file. With this update, a patch has been provided and cell formatting works as expected. BZ# 871462 The LibreOffice suite contains a number of harmless files used for testing purposes. Previously, on Microsoft Windows systems, these files could trigger false positive alerts on various anti-virus software, such as Microsoft Security Essentials. For example, the alerts could be triggered when scanning the Red Hat Enterprise Linux 6 ISO file. The underlying source code has been modified to fix this bug and the files no longer trigger false positive alerts in the described scenario. BZ# 876742 Due to an insufficient implementation of tables, the Impress utility made an internal copy of a table during every operation. Consequently, when a presentation included large tables, the operations proceeded significantly slower. This update provides a patch to optimize the table content traversal. As a result, the operations proceed faster in the described scenario. BZ# 902694 Previously, the keyboard-shortcut mapping was performed automatically. As a consequence, nonexistent keys were suggested as shortcuts in certain languages. With this update, a patch has been provided to fix this bug and affected shortcuts are now mapped manually. Users of libreoffice are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/libreoffice
Chapter 5. Sending traces and metrics to the OpenTelemetry Collector
Chapter 5. Sending traces and metrics to the OpenTelemetry Collector You can set up and use the Red Hat build of OpenTelemetry to send traces to the OpenTelemetry Collector or the TempoStack instance. Sending traces and metrics to the OpenTelemetry Collector is possible with or without sidecar injection. 5.1. Sending traces and metrics to the OpenTelemetry Collector with sidecar injection You can set up sending telemetry data to an OpenTelemetry Collector instance with sidecar injection. The Red Hat build of OpenTelemetry Operator allows sidecar injection into deployment workloads and automatic configuration of your instrumentation to send telemetry data to the OpenTelemetry Collector. Prerequisites The Red Hat OpenShift distributed tracing platform (Tempo) is installed, and a TempoStack instance is deployed. You have access to the cluster through the web console or the OpenShift CLI ( oc ): You are logged in to the web console as a cluster administrator with the cluster-admin role. An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Procedure Create a project for an OpenTelemetry Collector instance. apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability Create a service account. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar namespace: observability Grant the permissions to the service account for the k8sattributes and resourcedetection processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-sidecar namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Deploy the OpenTelemetry Collector as a sidecar. apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: serviceAccount: otel-collector-sidecar mode: sidecar config: serviceAccount: otel-collector-sidecar receivers: otlp: protocols: grpc: {} http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: "tempo-<example>-gateway:8090" 1 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp] 1 This points to the Gateway of the TempoStack instance deployed by using the <example> Tempo Operator. Create your deployment using the otel-collector-sidecar service account. Add the sidecar.opentelemetry.io/inject: "true" annotation to your Deployment object. This will inject all the needed environment variables to send data from your workloads to the OpenTelemetry Collector instance. 5.2. Sending traces and metrics to the OpenTelemetry Collector without sidecar injection You can set up sending telemetry data to an OpenTelemetry Collector instance without sidecar injection, which involves manually setting several environment variables. Prerequisites The Red Hat OpenShift distributed tracing platform (Tempo) is installed, and a TempoStack instance is deployed. 
You have access to the cluster through the web console or the OpenShift CLI ( oc ): You are logged in to the web console as a cluster administrator with the cluster-admin role. An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Procedure Create a project for an OpenTelemetry Collector instance. apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability Create a service account. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability Grant the permissions to the service account for the k8sattributes and resourcedetection processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Deploy the OpenTelemetry Collector instance with the OpenTelemetryCollector custom resource. apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: "tempo-<example>-distributor:4317" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] 1 This points to the Distributor of the TempoStack instance deployed by using the <example> Tempo Operator. Set the environment variables in the container with your instrumented application. Name Description Default value OTEL_SERVICE_NAME Sets the value of the service.name resource attribute. "" OTEL_EXPORTER_OTLP_ENDPOINT Base endpoint URL for any signal type with an optionally specified port number. https://localhost:4317 OTEL_EXPORTER_OTLP_CERTIFICATE Path to the certificate file for the TLS credentials of the gRPC client. No default value OTEL_TRACES_SAMPLER Sampler to be used for traces. parentbased_always_on OTEL_EXPORTER_OTLP_PROTOCOL Transport protocol for the OTLP exporter. grpc OTEL_EXPORTER_OTLP_TIMEOUT Maximum time interval for the OTLP exporter to wait for each batch export. 10s OTEL_EXPORTER_OTLP_INSECURE Disables client transport security for gRPC requests. An HTTPS schema overrides it. False
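The documentation stops at the table of environment variables, so the following is a minimal sketch of how those variables might be set on an instrumented workload for the no-sidecar case. The Deployment name, container image, and the otel-collector Service DNS name are assumptions for illustration only: the Operator typically creates a Service named <collector-name>-collector for a Collector running in deployment mode, but verify the actual Service name and port in your cluster.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app                       # hypothetical instrumented application
  namespace: observability               # any project works because the endpoint uses the full Service DNS name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: sample-app
        image: quay.io/example/sample-app:latest     # placeholder image
        env:
        - name: OTEL_SERVICE_NAME
          value: sample-app
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: http://otel-collector.observability.svc.cluster.local:4317   # assumed Collector Service name
        - name: OTEL_EXPORTER_OTLP_PROTOCOL
          value: grpc
        - name: OTEL_EXPORTER_OTLP_INSECURE
          value: "true"                  # matches the insecure OTLP exporter used in this example setup

With these variables set, the application's OpenTelemetry SDK exports OTLP data over gRPC directly to the Collector Deployment; no sidecar injection is involved.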
[ "apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar namespace: observability", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-sidecar namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: serviceAccount: otel-collector-sidecar mode: sidecar config: serviceAccount: otel-collector-sidecar receivers: otlp: protocols: grpc: {} http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: \"tempo-<example>-gateway:8090\" 1 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp]", "apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-<example>-distributor:4317\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/red_hat_build_of_opentelemetry/otel-sending-traces-and-metrics-to-otel-collector
Using jlink to customize Java runtime environment
Using jlink to customize Java runtime environment Red Hat build of OpenJDK 17 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/using_jlink_to_customize_java_runtime_environment/index
Chapter 2. Setting up a project and storage
Chapter 2. Setting up a project and storage 2.1. Navigating to the OpenShift AI dashboard Procedure How you open the OpenShift AI dashboard depends on your OpenShift environment: If you are using the Red Hat Developer Sandbox : After you log in to the Sandbox, click Getting Started Available services , and then, in the Red Hat OpenShift AI card, click Launch . If you are using your own OpenShift cluster : After you log in to the OpenShift console, click the application launcher icon on the header. When prompted, log in to the OpenShift AI dashboard by using your OpenShift credentials. OpenShift AI uses the same credentials as OpenShift for the dashboard, notebooks, and all other components. The OpenShift AI dashboard shows the Home page. Note You can navigate back to the OpenShift console by clicking the application launcher to access the OpenShift console. For now, stay in the OpenShift AI dashboard. step Setting up your data science project 2.2. Setting up your data science project To implement a data science workflow, you must create a data science project (as described in the following procedure). Projects allow you and your team to organize and collaborate on resources within separated namespaces. From a project you can create multiple workbenches, each with their own IDE environment (for example, JupyterLab), and each with their own connections and cluster storage. In addition, the workbenches can share models and data with pipelines and model servers. Prerequisites Before you begin, log in to Red Hat OpenShift AI . Procedure On the navigation menu, select Data Science Projects . This page lists any existing projects that you have access to. From this page, you can select an existing project (if any) or create a new one. Note It is possible to start a Jupyter notebook by clicking the Launch standalone workbench button, selecting a notebook image, and clicking Start server . However, it would be a one-off Jupyter notebook run in isolation. If you are using your own OpenShift cluster, click Create project . Note If you are using the Red Hat Developer Sandbox, you are provided with a default data science project (for example, myname-dev ). Select it and skip over the step to the Verification section. Enter a display name and description. Verification You can see your project's initial state. Individual tabs provide more information about the project components and project access permissions: Workbenches are instances of your development and experimentation environment. They typically contain IDEs, such as JupyterLab, RStudio, and Visual Studio Code. Pipelines contain the data science pipelines that are executed within the project. Models allow you to quickly serve a trained model for real-time inference. You can have multiple model servers per data science project. One model server can host multiple models. Cluster storage is a persistent volume that retains the files and data you're working on within a workbench. A workbench has access to one or more cluster storage instances. Connections contain configuration parameters that are required to connect to a data source, such as an S3 object bucket. Permissions define which users and groups can access the project. step Storing data with connections 2.3. Storing data with connections Add connections to workbenches to connect your project to data inputs and object storage buckets. A connection is a resource that contains the configuration parameters needed to connect to a data source or data sink, such as an AWS S3 object storage bucket. 
For this tutorial, you run a provided script that creates the following local Minio storage buckets for you: My Storage - Use this bucket for storing your models and data. You can reuse this bucket and its connection for your notebooks and model servers. Pipelines Artifacts - Use this bucket as storage for your pipeline artifacts. A pipeline artifacts bucket is required when you create a pipeline server. For this tutorial, create this bucket to separate it from the first storage bucket for clarity. Note While it is possible for you to use one storage bucket for both purposes (storing models and data as well as storing pipeline artifacts), this tutorial follows best practice and uses separate storage buckets for each purpose. The provided script also creates a connection to each storage bucket. To run the script that installs local MinIO storage buckets and creates connections to them, follow the steps in Running a script to install local object storage buckets and create connections . Note If you want to use your own S3-compatible object storage buckets (instead of using the provided script), follow the steps in Creating connections to your own S3-compatible object storage . 2.3.1. Running a script to install local object storage buckets and create connections For convenience, run a script (provided in the following procedure) that automatically completes these tasks: Creates a Minio instance in your project. Creates two storage buckets in that Minio instance. Generates a random user id and password for your Minio instance. Creates two connections in your project, one for each bucket and both using the same credentials. Installs required network policies for service mesh functionality. The script is based on a guide for deploying Minio . Important The Minio-based Object Storage that the script creates is not meant for production usage. Note If you want to connect to your own storage, see Creating connections to your own S3-compatible object storage . Prerequisites You must know the OpenShift resource name for your data science project so that you run the provided script in the correct project. To get the project's resource name: In the OpenShift AI dashboard, select Data Science Projects and then click the ? icon next to the project name. A text box appears with information about the project, including its resource name. Note The following procedure describes how to run the script from the OpenShift console. If you are knowledgeable in OpenShift and can access the cluster from the command line, instead of following the steps in this procedure, you can use the following command to run the script: Procedure In the OpenShift AI dashboard, click the application launcher icon and then select the OpenShift Console option. In the OpenShift console, click + in the top navigation bar. Select your project from the list of projects. Verify that you selected the correct project. Copy the following code and paste it into the Import YAML editor. Note This code gets and applies the setup-s3-no-sa.yaml file. 
--- apiVersion: v1 kind: ServiceAccount metadata: name: demo-setup --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: demo-setup-edit roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - kind: ServiceAccount name: demo-setup --- apiVersion: batch/v1 kind: Job metadata: name: create-s3-storage spec: selector: {} template: spec: containers: - args: - -ec - |- echo -n 'Setting up Minio instance and connections' oc apply -f https://github.com/rh-aiservices-bu/fraud-detection/raw/main/setup/setup-s3-no-sa.yaml command: - /bin/bash image: image-registry.openshift-image-registry.svc:5000/openshift/tools:latest imagePullPolicy: IfNotPresent name: create-s3-storage restartPolicy: Never serviceAccount: demo-setup serviceAccountName: demo-setup Click Create . Verification In the OpenShift console, you should see a "Resources successfully created" message and the following resources listed: demo-setup demo-setup-edit create-s3-storage In the OpenShift AI dashboard: Select Data Science Projects and then click the name of your project, Fraud detection . Click Connections . You should see two connections listed: My Storage and Pipeline Artifacts . step If you want to complete the pipelines section of this tutorial, go to Enabling data science pipelines . Otherwise, skip to Creating a workbench . 2.3.2. Creating connections to your own S3-compatible object storage If you have existing S3-compatible storage buckets that you want to use for this tutorial, you must create a connection to one storage bucket for saving your data and models. If you want to complete the pipelines section of this tutorial, create another connection to a different storage bucket for saving pipeline artifacts. Note If you do not have your own s3-compatible storage, or if you want to use a disposable local Minio instance instead, skip this section and follow the steps in Running a script to install local object storage buckets and create connections . The provided script automatically completes the following tasks for you: creates a Minio instance in your project, creates two storage buckets in that Minio instance, creates two connections in your project, one for each bucket and both using the same credentials, and installs required network policies for service mesh functionality. Prerequisites To create connections to your existing S3-compatible storage buckets, you need the following credential information for the storage buckets: Endpoint URL Access key Secret key Region Bucket name If you don't have this information, contact your storage administrator. Procedure Create a connection for saving your data and models: In the OpenShift AI dashboard, navigate to the page for your data science project. Click the Connections tab, and then click Create connection . In the Add connection modal, for the Connection type select S3 compatible object storage - v1 . Complete the Add connection form and name your connection My Storage . This connection is for saving your personal work, including data and models. Click Create . Create a connection for saving pipeline artifacts: Note If you do not intend to complete the pipelines section of the tutorial, you can skip this step. Click Add connection . Complete the form and name your connection Pipeline Artifacts . Click Create . Verification In the Connections tab for the project, check to see that your connections are listed. step If you want to complete the pipelines section of this tutorial, go to Enabling data science pipelines . 
Otherwise, skip to Creating a workbench . 2.4. Enabling data science pipelines Note If you do not intend to complete the pipelines section of this tutorial, you can skip this step and move on to the next section, Creating a workbench . In this section, you prepare your tutorial environment so that you can use data science pipelines. Later in this tutorial, you implement an example pipeline by using the JupyterLab Elyra extension. With Elyra, you can create a visual end-to-end pipeline workflow that can be executed in OpenShift AI. Prerequisites You have installed local object storage buckets and created connections, as described in Storing data with connections . Procedure In the OpenShift AI dashboard, on the Fraud Detection page, click the Pipelines tab. Click Configure pipeline server . In the Configure pipeline server form, in the Access key field next to the key icon, click the dropdown menu and then click Pipeline Artifacts to populate the Configure pipeline server form with credentials for the connection. Leave the database configuration as the default. Click Configure pipeline server . Wait until the loading spinner disappears and Start by importing a pipeline is displayed. Important You must wait until the pipeline configuration is complete before you continue and create your workbench. If you create your workbench before the pipeline server is ready, your workbench will not be able to submit pipelines to it. If you have waited more than 5 minutes, and the pipeline server configuration does not complete, you can delete the pipeline server and create it again. You can also ask your OpenShift AI administrator to verify that self-signed certificates are added to your cluster as described in Working with certificates . Verification Navigate to the Pipelines tab for the project. Next to Import pipeline , click the action menu (...) and then select View pipeline server configuration . An information box opens and displays the object storage connection information for the pipeline server. step Creating a workbench and selecting a notebook image
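Earlier in this chapter you created the My Storage and Pipeline Artifacts connections through the dashboard or the setup script. If you prefer to create a connection from the command line, the following sketch shows roughly what such a connection looks like as a Kubernetes Secret. The labels, annotations, and key names are assumptions about how the OpenShift AI dashboard stores S3-compatible connections and can differ between versions, and every credential value is a placeholder, so treat this as illustrative rather than authoritative.

apiVersion: v1
kind: Secret
metadata:
  name: aws-connection-my-storage            # assumed name derived from the connection display name
  namespace: <your-project-name>             # your data science project namespace
  labels:
    opendatahub.io/dashboard: "true"         # assumed label that surfaces the Secret as a connection
  annotations:
    opendatahub.io/connection-type: s3       # assumed annotation marking an S3-compatible connection
    openshift.io/display-name: My Storage
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: minio                   # placeholder credentials
  AWS_SECRET_ACCESS_KEY: minio123            # placeholder credentials
  AWS_S3_ENDPOINT: http://minio-service.<your-project-name>.svc:9000   # placeholder endpoint
  AWS_DEFAULT_REGION: us-east-1              # placeholder region
  AWS_S3_BUCKET: my-storage                  # placeholder bucket name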
[ "apply -n <your-project-name/> -f https://github.com/rh-aiservices-bu/fraud-detection/raw/main/setup/setup-s3.yaml", "--- apiVersion: v1 kind: ServiceAccount metadata: name: demo-setup --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: demo-setup-edit roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - kind: ServiceAccount name: demo-setup --- apiVersion: batch/v1 kind: Job metadata: name: create-s3-storage spec: selector: {} template: spec: containers: - args: - -ec - |- echo -n 'Setting up Minio instance and connections' oc apply -f https://github.com/rh-aiservices-bu/fraud-detection/raw/main/setup/setup-s3-no-sa.yaml command: - /bin/bash image: image-registry.openshift-image-registry.svc:5000/openshift/tools:latest imagePullPolicy: IfNotPresent name: create-s3-storage restartPolicy: Never serviceAccount: demo-setup serviceAccountName: demo-setup" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/openshift_ai_tutorial_-_fraud_detection_example/setting-up-a-project-and-storage
Chapter 7. Installation configuration parameters for the Agent-based Installer
Chapter 7. Installation configuration parameters for the Agent-based Installer Before you deploy an OpenShift Container Platform cluster using the Agent-based Installer, you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml and agent-config.yaml files, you must provide values for the required parameters, and you can use the optional parameters to customize your cluster further. 7.1. Available installation configuration parameters The following tables specify the required and optional installation configuration parameters that you can set as part of the Agent-based installation process. These values are specified in the install-config.yaml file. Note These settings are used for installation only, and cannot be modified after installation. 7.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 7.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . When you do not provide metadata.name through either the install-config.yaml or agent-config.yaml files, for example when you use only ZTP manifests, the cluster name is set to agent-cluster . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: baremetal , external , none , or vsphere . Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 7.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you use the Red Hat OpenShift Networking OpenShift SDN network plugin, only the IPv4 address family is supported. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. 
networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 7.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 Required if you use networking.clusterNetwork . An IP address block. If you use the OpenShift SDN network plugin, specify an IPv4 network. If you use the OVN-Kubernetes network plugin, you can specify IPv4 and IPv6 networks. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The prefix length for an IPv6 block is between 0 and 128 . For example, 10.128.0.0/14 or fd01::/48 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. For an IPv4 network the default value is 23 . For an IPv6 network the default value is 64 . The default value is also the minimum value for IPv6. The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. If you use the OVN-Kubernetes network plugin, you can specify an IP address block for both of the IPv4 and IPv6 address families. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 or fd00::/48 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 7.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 7.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. 
String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. baremetal , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. 
baremetal , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). [1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 7.2. Available Agent configuration parameters The following tables specify the required and optional Agent configuration parameters that you can set as part of the Agent-based installation process. These values are specified in the agent-config.yaml file. Note These settings are used for installation only, and cannot be modified after installation. 7.2.1. Required configuration parameters Required Agent configuration parameters are described in the following table: Table 7.4. Required parameters Parameter Description Values The API version for the agent-config.yaml content. The current version is v1beta1 . The installation program might also support older API versions. String Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . 
The value entered in the agent-config.yaml file is ignored, and instead the value specified in the install-config.yaml file is used. When you do not provide metadata.name through either the install-config.yaml or agent-config.yaml files, for example when you use only ZTP manifests, the cluster name is set to agent-cluster . String of lowercase letters and hyphens ( - ), such as dev . 7.2.2. Optional configuration parameters Optional Agent configuration parameters are described in the following table: Table 7.5. Optional parameters Parameter Description Values The IP address of the node that performs the bootstrapping process as well as running the assisted-service component. You must provide the rendezvous IP address when you do not specify at least one host's IP address in the networkConfig parameter. If this address is not provided, one IP address is selected from the provided hosts' networkConfig . IPv4 or IPv6 address. The URL of the server to upload Preboot Execution Environment (PXE) assets to when using the Agent-based Installer to generate an iPXE script. For more information, see "Preparing PXE assets for OpenShift Container Platform". String. A list of Network Time Protocol (NTP) sources to be added to all cluster hosts, which are added to any NTP sources that are configured through other means. List of hostnames or IP addresses. Host configuration. An optional list of hosts. The number of hosts defined must not exceed the total number of hosts defined in the install-config.yaml file, which is the sum of the values of the compute.replicas and controlPlane.replicas parameters. An array of host configuration objects. Hostname. Overrides the hostname obtained from either the Dynamic Host Configuration Protocol (DHCP) or a reverse DNS lookup. Each host must have a unique hostname supplied by one of these methods, although configuring a hostname through this parameter is optional. String. Provides a table of the name and MAC address mappings for the interfaces on the host. If a NetworkConfig section is provided in the agent-config.yaml file, this table must be included and the values must match the mappings provided in the NetworkConfig section. An array of host configuration objects. The name of an interface on the host. String. The MAC address of an interface on the host. A MAC address such as the following example: 00-B0-D0-63-C2-26 . Defines whether the host is a master or worker node. If no role is defined in the agent-config.yaml file, roles will be assigned at random during cluster installation. master or worker . Enables provisioning of the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installation program examines the devices in the order it discovers them, and compares the discovered values with the hint values. It uses the first discovered device that matches the hint value. This is the device that the operating system is written on during installation. A dictionary of key-value pairs. For more information, see "Root device hints" in the "Setting up the environment for an OpenShift installation" page. The name of the device the RHCOS image is provisioned to. String. The host network definition. The configuration must match the Host Network Management API defined in the nmstate documentation . A dictionary of host network configuration objects. Additional resources Preparing PXE assets for OpenShift Container Platform Root device hints
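To tie the install-config.yaml parameters together, the following is a minimal sketch of an install-config.yaml file for a single-node Agent-based installation. The base domain, cluster name, machine network CIDR, pull secret, and SSH key are placeholders, and optional sections such as the compute pool, proxy, and FIPS settings are omitted; adjust every value to your environment.

apiVersion: v1
baseDomain: example.com                      # placeholder base domain
metadata:
  name: sno-cluster                          # placeholder cluster name
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 192.168.111.0/24                   # set to the CIDR that the preferred NIC resides in
controlPlane:
  name: master
  architecture: amd64
  replicas: 1                                # 1 for single-node OpenShift, 3 otherwise
platform:
  none: {}
pullSecret: '<pull_secret>'                  # pull secret from Red Hat OpenShift Cluster Manager
sshKey: 'ssh-ed25519 AAAA...'                # optional SSH public key for the cluster machines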
[ "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:", "apiVersion:", "metadata:", "metadata: name:", "rendezvousIP:", "bootArtifactsBaseURL:", "additionalNTPSources:", "hosts:", "hosts: hostname:", "hosts: interfaces:", "hosts: interfaces: name:", "hosts: interfaces: macAddress:", "hosts: role:", "hosts: rootDeviceHints:", "hosts: rootDeviceHints: deviceName:", "hosts: networkConfig:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_an_on-premise_cluster_with_the_agent-based_installer/installation-config-parameters-agent
Chapter 1. Provisioning APIs
Chapter 1. Provisioning APIs 1.1. BMCEventSubscription [metal3.io/v1alpha1] Description BMCEventSubscription is the Schema for the fast eventing API Type object 1.2. BareMetalHost [metal3.io/v1alpha1] Description BareMetalHost is the Schema for the baremetalhosts API Type object 1.3. FirmwareSchema [metal3.io/v1alpha1] Description FirmwareSchema is the Schema for the firmwareschemas API Type object 1.4. HardwareData [metal3.io/v1alpha1] Description HardwareData is the Schema for the hardwaredata API Type object 1.5. HostFirmwareSettings [metal3.io/v1alpha1] Description HostFirmwareSettings is the Schema for the hostfirmwaresettings API Type object 1.6. Metal3Remediation [infrastructure.cluster.x-k8s.io/v1beta1] Description Metal3Remediation is the Schema for the metal3remediations API. Type object 1.7. Metal3RemediationTemplate [infrastructure.cluster.x-k8s.io/v1beta1] Description Metal3RemediationTemplate is the Schema for the metal3remediationtemplates API. Type object 1.8. PreprovisioningImage [metal3.io/v1alpha1] Description PreprovisioningImage is the Schema for the preprovisioningimages API Type object 1.9. Provisioning [metal3.io/v1alpha1] Description Provisioning contains configuration used by the Provisioning service (Ironic) to provision baremetal hosts. Provisioning is created by the OpenShift installer using admin or user provided information about the provisioning network and the NIC on the server that can be used to PXE boot it. This CR is a singleton, created by the installer and currently only consumed by the cluster-baremetal-operator to bring up and update containers in a metal3 cluster. Type object
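This reference lists the provisioning resource types without example manifests. The following is a minimal sketch of a BareMetalHost and its BMC credentials Secret, based on the upstream Metal3 v1alpha1 API rather than on this reference; the host name, namespace, MAC address, BMC address, and credentials are placeholders, and the fields you actually need depend on your hardware and BMC driver.

apiVersion: v1
kind: Secret
metadata:
  name: worker-2-bmc-secret                  # hypothetical Secret holding BMC credentials
  namespace: openshift-machine-api
type: Opaque
stringData:
  username: admin                            # placeholder BMC user
  password: password                         # placeholder BMC password
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker-2                             # hypothetical host name
  namespace: openshift-machine-api
spec:
  online: true                               # power the host on and keep it provisioned
  bootMACAddress: 00:B0:D0:63:C2:26          # placeholder MAC address of the provisioning NIC
  bmc:
    address: redfish-virtualmedia://192.168.111.1:8000/redfish/v1/Systems/1   # placeholder BMC URL
    credentialsName: worker-2-bmc-secret     # references the Secret above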
null
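For the fast eventing API listed above, a BMCEventSubscription might look roughly like the following sketch. The host name, destination URL, and context string are placeholders, and the field set shown is an assumption based on the upstream Metal3 v1alpha1 API rather than on this reference; check the schema on your cluster before applying it.

apiVersion: metal3.io/v1alpha1
kind: BMCEventSubscription
metadata:
  name: worker-2-events                      # hypothetical subscription name
  namespace: openshift-machine-api
spec:
  hostName: worker-2                         # BareMetalHost whose BMC events are subscribed to (placeholder)
  destination: https://events.example.com/webhook   # placeholder endpoint that receives Redfish events
  context: fast-eventing-example             # optional opaque context returned with each event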
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/provisioning_apis/provisioning-apis
Chapter 2. Image [image.openshift.io/v1]
Chapter 2. Image [image.openshift.io/v1] Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dockerImageConfig string DockerImageConfig is a JSON blob that the runtime uses to set up the container. This is a part of manifest schema v2. Will not be set when the image represents a manifest list. dockerImageLayers array DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. dockerImageLayers[] object ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. dockerImageManifest string DockerImageManifest is the raw JSON of the manifest dockerImageManifestMediaType string DockerImageManifestMediaType specifies the mediaType of manifest. This is a part of manifest schema v2. dockerImageManifests array DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. dockerImageManifests[] object ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. dockerImageMetadata RawExtension DockerImageMetadata contains metadata about this image dockerImageMetadataVersion string DockerImageMetadataVersion conveys the version of the object, which if empty defaults to "1.0" dockerImageReference string DockerImageReference is the string that can be used to pull this image. dockerImageSignatures array (string) DockerImageSignatures provides the signatures as opaque blobs. This is a part of manifest schema v1. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signatures array Signatures holds all signatures of the image. signatures[] object ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. 
Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 2.1.1. .dockerImageLayers Description DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. Type array 2.1.2. .dockerImageLayers[] Description ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. Type object Required name size mediaType Property Type Description mediaType string MediaType of the referenced object. name string Name of the layer as defined by the underlying store. size integer Size of the layer in bytes as defined by the underlying store. 2.1.3. .dockerImageManifests Description DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. Type array 2.1.4. .dockerImageManifests[] Description ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. Type object Required digest mediaType manifestSize architecture os Property Type Description architecture string Architecture specifies the supported CPU architecture, for example amd64 or ppc64le . digest string Digest is the unique identifier for the manifest. It refers to an Image object. manifestSize integer ManifestSize represents the size of the raw object contents, in bytes. mediaType string MediaType defines the type of the manifest, possible values are application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.v1+json. os string OS specifies the operating system, for example linux . variant string Variant is an optional field representing a variant of the CPU, for example v6 to specify a particular CPU variant of the ARM CPU. 2.1.5. .signatures Description Signatures holds all signatures of the image. Type array 2.1.6. .signatures[] Description ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required type content Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array Conditions represent the latest available observations of a signature's current state. conditions[] object SignatureCondition describes an image signature condition of particular kind at particular probe time. content string Required: An opaque binary string which is an image's signature. created Time If specified, it is the time of signature's creation. 
imageIdentity string A human readable string representing image's identity. It could be a product name and version, or an image pull spec (e.g. "registry.access.redhat.com/rhel7/rhel:7.2"). issuedBy object SignatureIssuer holds information about an issuer of signing certificate or key. issuedTo object SignatureSubject holds information about a person or entity who created the signature. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signedClaims object (string) Contains claims from the signature. type string Required: Describes a type of stored blob. 2.1.7. .signatures[].conditions Description Conditions represent the latest available observations of a signature's current state. Type array 2.1.8. .signatures[].conditions[] Description SignatureCondition describes an image signature condition of particular kind at particular probe time. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of signature condition, Complete or Failed. 2.1.9. .signatures[].issuedBy Description SignatureIssuer holds information about an issuer of signing certificate or key. Type object Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. 2.1.10. .signatures[].issuedTo Description SignatureSubject holds information about a person or entity who created the signature. Type object Required publicKeyID Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. publicKeyID string If present, it is a human readable key id of public key belonging to the subject used to verify image signature. It should contain at least 64 lowest bits of public key's fingerprint (e.g. 0x685ebe62bf278440). 2.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/images DELETE : delete collection of Image GET : list or watch objects of kind Image POST : create an Image /apis/image.openshift.io/v1/watch/images GET : watch individual changes to a list of Image. deprecated: use the 'watch' parameter with a list operation instead. /apis/image.openshift.io/v1/images/{name} DELETE : delete an Image GET : read the specified Image PATCH : partially update the specified Image PUT : replace the specified Image /apis/image.openshift.io/v1/watch/images/{name} GET : watch changes to an object of kind Image. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. /apis/image.openshift.io/v1/images Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Image Table 2.2. 
Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 2.3. Body parameters Parameter Type Description body DeleteOptions schema Table 2.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Image Table 2.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. 
If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . 
In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.6. HTTP responses HTTP code Reponse body 200 - OK ImageList schema 401 - Unauthorized Empty HTTP method POST Description create an Image Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.8. Body parameters Parameter Type Description body Image schema Table 2.9. HTTP responses HTTP code Reponse body 200 - OK Image schema 201 - Created Image schema 202 - Accepted Image schema 401 - Unauthorized Empty 2.2.2. /apis/image.openshift.io/v1/watch/images Table 2.10. 
Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Image. deprecated: use the 'watch' parameter with a list operation instead. Table 2.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/image.openshift.io/v1/images/{name} Table 2.12. Global path parameters Parameter Type Description name string name of the Image Table 2.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an Image Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. 
propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.15. Body parameters Parameter Type Description body DeleteOptions schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Image Table 2.17. HTTP responses HTTP code Reponse body 200 - OK Image schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Image Table 2.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.19. Body parameters Parameter Type Description body Patch schema Table 2.20. HTTP responses HTTP code Reponse body 200 - OK Image schema 201 - Created Image schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Image Table 2.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.22. Body parameters Parameter Type Description body Image schema Table 2.23. HTTP responses HTTP code Reponse body 200 - OK Image schema 201 - Created Image schema 401 - Unauthorized Empty 2.2.4. /apis/image.openshift.io/v1/watch/images/{name} Table 2.24. Global path parameters Parameter Type Description name string name of the Image Table 2.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. 
Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Image. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.26. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
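As a brief, hedged illustration of how the list and watch operations above can be exercised, the same endpoint can be called through the oc client; the limit value and the use of resourceVersion=0 below are arbitrary example choices, not requirements of the API.
$ oc get images
$ oc get --raw "/apis/image.openshift.io/v1/images?limit=5"
$ oc get --raw "/apis/image.openshift.io/v1/images?watch=true&resourceVersion=0"
The last call passes the watch query parameter on the list endpoint, which is the replacement recommended above for the deprecated /watch/images path.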
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/image_apis/image-image-openshift-io-v1
8.14. Creating a Network Team Using a GUI
8.14. Creating a Network Team Using a GUI 8.14.1. Establishing a Team Connection You can use nm-connection-editor to direct NetworkManager to create a team from two or more Wired or InfiniBand connections. It is not necessary to create the connections to be teamed first. They can be configured as part of the team configuration process. You must have the MAC addresses of the interfaces available in order to complete the configuration process. Procedure 8.1. Adding a New Team Connection Using nm-connection-editor Follow the steps below to add a new team connection. Enter nm-connection-editor in a terminal: Click the Add button. The Choose a Connection Type window appears. Select Team and click Create . The Editing Team connection 1 window appears. Figure 8.6. The NetworkManager Graphical User Interface Add a menu On the Team tab, click Add and select the type of interface you want to use with the team connection. Click the Create button. Note that the dialog to select the port type only appears when you create the first port; after that, the same type is used automatically for all further ports. The Editing team0 slave 1 window appears. Figure 8.7. The NetworkManager Graphical User Interface Add a Slave Connection If custom port settings are to be applied, click on the Team Port tab and enter a JSON configuration string or import it from a file. Click the Save button. The name of the teamed port appears in the Teamed connections window. Click the Add button to add further port connections. Review and confirm the settings and then click the Save button. Edit the team-specific settings by referring to Section 8.14.1.1, "Configuring the Team Tab" below. Procedure 8.2. Editing an Existing Team Connection Follow the steps below to edit an existing team connection. Enter nm-connection-editor in a terminal: Select the connection you want to edit and click the Edit button. Select the General tab. Five settings in the Editing dialog are common to most connection types. See the General tab: Connection name - Enter a descriptive name for your network connection. This name is used to list this connection in the menu of the Network window. Connection priority for auto-activation - If the connection is set to autoconnect, this number sets its activation priority ( 0 by default). A higher number means a higher priority. Automatically connect to this network when it is available - Select this box if you want NetworkManager to auto-connect to this connection when it is available. See the section called "Editing an Existing Connection with control-center" for more information. All users may connect to this network - Select this box to create a connection available to all users on the system. Changing this setting may require root privileges. See Section 3.4.5, "Managing System-wide and Private Connection Profiles with a GUI" for details. Automatically connect to VPN when using this connection - Select this box if you want NetworkManager to auto-connect to a VPN connection when it is available. Select the VPN from the drop-down menu. Firewall Zone - Select the firewall zone from the drop-down menu. See the Red Hat Enterprise Linux 7 Security Guide for more information on firewall zones. Edit the team-specific settings by referring to Section 8.14.1.1, "Configuring the Team Tab" below. Saving Your New (or Modified) Connection and Making Further Configurations Once you have finished editing your team connection, click the Save button to save your customized configuration. 
Then, to configure: IPv4 settings for the connection, click the IPv4 Settings tab and proceed to Section 5.4, "Configuring IPv4 Settings" ; or IPv6 settings for the connection, click the IPv6 Settings tab and proceed to Section 5.5, "Configuring IPv6 Settings" . 8.14.1.1. Configuring the Team Tab If you have already added a new team connection, you can enter a custom JSON configuration string in the text box or import a configuration file. Click Save to apply the JSON configuration to the team interface. For examples of JSON strings, see Section 8.13, "Configure teamd Runners" , or the sketch below. See Procedure 8.1, "Adding a New Team Connection Using nm-connection-editor" for instructions on how to add a new team.
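The following is a minimal sketch of the kind of JSON string that can be pasted into the Team tab; the active-backup runner and ethtool link watcher shown here are just one possible combination, chosen for illustration. The same string can be saved to a file and imported instead of being typed in.
{
    "runner": { "name": "activebackup" },
    "link_watch": { "name": "ethtool" }
}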
[ "~]USD nm-connection-editor", "~]USD nm-connection-editor" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-creating_a_network_team_using_a_gui
7.16. bind
7.16. bind 7.16.1. RHBA-2015:1250 - bind bug fix and enhancement update Updated bind packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. BIND (Berkeley Internet Name Domain) is an implementation of the Domain Name System (DNS) protocols. BIND includes a DNS server (named), which resolves host names to IP addresses, a resolver library (routines for applications to use when interfacing with DNS), and tools for verifying that the DNS server is operating correctly. Bug Fixes BZ# 1112356 Previously, the "slip" option was not handled correctly in the Response Rate Limiting (RRL) code in BIND, and the variable counting the number of queries was not reset after each query, but after every other query. As a consequence, when the "slip" value of the RRL feature was set to one, instead of slipping every query, every other query was dropped. To fix this bug, the RRL code has been amended to reset the variable correctly according to the configuration. Now, when the "slip" value of the RRL feature is set to one, every query is slipped as expected. BZ# 1142152 BIND incorrectly handled errors returned by dynamic databases (from dyndbAPI). Consequently, BIND could enter a deadlock situation on shutdown under certain circumstances. The dyndb API has been fixed not to cause a deadlock during BIND shutdown after the dynamic database returns an error, and BIND now shuts down normally in the described situation. BZ# 1146893 Because the Simplified Database Backend (SDB) application interface did not handle unexpected SDB database driver errors properly, BIND used with SDB could terminate unexpectedly when such errors occurred. With this update, the SDB application interface has been cleaned to handle these errors correctly, and BIND used with SDB no longer crashes if they happen. BZ# 1175321 Due to a race condition in the beginexclusive() function, the BIND DNS server (named) could terminate unexpectedly while loading configuration. To fix this bug, a patch has been applied, and the race condition no longer occurs. BZ# 1215687 Previously, when the resolver was under heavy load, some clients could receive a SERVFAIL response from the server and numerous "out of memory/success" log messages in BIND's log. Also, cached records with low TTL (1) could expire prematurely. Internal hardcoded limits in the resolver have been increased, and conditions for expiring cached records with low TTL (1) have been made stricter. This prevents the resolver from reaching the limits when under heavy load, and the "out of memory/success" log messages from being received. Cached records with low TTL (1) no longer expire prematurely. Enhancement BZ# 1176476 Users can now use RPZ-NSIP and RPZ-NSDNAME records with Response Policy Zone (RPZ) in the BIND configuration. Users of BIND are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. After installing the update, the BIND daemon (named) will be restarted automatically.
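As a hedged sketch of how the new RPZ support might be wired up, a response policy zone is declared in /etc/named.conf along the following lines; the zone name and file name are placeholders, and the policy records themselves, including any RPZ-NSIP and RPZ-NSDNAME entries, belong in the referenced zone file.
options {
    response-policy { zone "rpz.example.com"; };
};

zone "rpz.example.com" {
    type master;
    file "rpz.example.com.db";
};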
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-bind
Chapter 9. Optimizing storage
Chapter 9. Optimizing storage Optimizing storage helps to minimize storage use across all resources. By optimizing storage, administrators help ensure that existing storage resources are working in an efficient manner. 9.1. Available persistent storage options Understand your persistent storage options so that you can optimize your OpenShift Container Platform environment. Table 9.1. Available storage options Storage type Description Examples Block Presented to the operating system (OS) as a block device Suitable for applications that need full control of storage and operate at a low level on files bypassing the file system Also referred to as a Storage Area Network (SAN) Non-shareable, which means that only one client at a time can mount an endpoint of this type AWS EBS and VMware vSphere support dynamic persistent volume (PV) provisioning natively in OpenShift Container Platform. File Presented to the OS as a file system export to be mounted Also referred to as Network Attached Storage (NAS) Concurrency, latency, file locking mechanisms, and other capabilities vary widely between protocols, implementations, vendors, and scales. RHEL NFS, NetApp NFS [1] , and Vendor NFS Object Accessible through a REST API endpoint Configurable for use in the OpenShift image registry Applications must build their drivers into the application and/or container. AWS S3 NetApp NFS supports dynamic PV provisioning when using the Trident plugin. 9.2. Recommended configurable storage technology The following table summarizes the recommended and configurable storage technologies for the given OpenShift Container Platform cluster application. Table 9.2. Recommended and configurable storage technology Storage type Block File Object 1 ReadOnlyMany 2 ReadWriteMany 3 Prometheus is the underlying technology used for metrics. 4 This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk. 5 For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any persistent volume claims (PVCs) that are configured for use with metrics. 6 For logging, review the recommended storage solution in Configuring persistent storage for the log store section. Using NFS storage as a persistent volume or through NAS, such as Gluster, can corrupt the data. Hence, NFS is not supported for Elasticsearch storage and LokiStack log store in OpenShift Container Platform Logging. You must use one persistent volume type per log store. 7 Object storage is not consumed through OpenShift Container Platform's PVs or PVCs. Apps must integrate with the object storage REST API. ROX 1 Yes 4 Yes 4 Yes RWX 2 No Yes Yes Registry Configurable Configurable Recommended Scaled registry Not configurable Configurable Recommended Metrics 3 Recommended Configurable 5 Not configurable Elasticsearch Logging Recommended Configurable 6 Not supported 6 Loki Logging Configurable Not configurable Recommended Apps Recommended Recommended Not configurable 7 Note A scaled registry is an OpenShift image registry where two or more pod replicas are running. 9.2.1. Specific application storage recommendations Important Testing shows issues with using the NFS server on Red Hat Enterprise Linux (RHEL) as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. 
Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. 9.2.1.1. Registry In a non-scaled/high-availability (HA) OpenShift image registry cluster deployment: The storage technology does not have to support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage followed by block storage. File storage is not recommended for OpenShift image registry cluster deployment with production workloads. 9.2.1.2. Scaled registry In a scaled/HA OpenShift image registry cluster deployment: The storage technology must support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage. Red Hat OpenShift Data Foundation (ODF), Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Microsoft Azure Blob Storage, and OpenStack Swift are supported. Object storage should be S3 or Swift compliant. For non-cloud platforms, such as vSphere and bare metal installations, the only configurable technology is file storage. Block storage is not configurable. 9.2.1.3. Metrics In an OpenShift Container Platform hosted metrics cluster deployment: The preferred storage technology is block storage. Object storage is not configurable. Important It is not recommended to use file storage for a hosted metrics cluster deployment with production workloads. 9.2.1.4. Logging In an OpenShift Container Platform hosted logging cluster deployment: The preferred storage technology is block storage. Object storage is not configurable. 9.2.1.5. Applications Application use cases vary from application to application, as described in the following examples: Storage technologies that support dynamic PV provisioning have low mount time latencies, and are not tied to nodes to support a healthy cluster. Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer. 9.2.2. Other specific application storage recommendations Important It is not recommended to use RAID configurations on Write intensive workloads, such as etcd . If you are running etcd with a RAID configuration, you might be at risk of encountering performance issues with your workloads. Red Hat OpenStack Platform (RHOSP) Cinder: RHOSP Cinder tends to be adept in ROX access mode use cases. Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage. The etcd database must have enough storage and adequate performance capacity to enable a large cluster. Information about monitoring and benchmarking tools to establish ample storage and a high-performance environment is described in Recommended etcd practices . 9.3. Data storage management The following table summarizes the main directories that OpenShift Container Platform components write data to. Table 9.3. Main directories for storing OpenShift Container Platform data Directory Notes Sizing Expected growth /var/log Log files for all components. 10 to 30 GB. Log files can grow quickly; size can be managed by growing disks or by using log rotate. /var/lib/etcd Used for etcd storage when storing the database. Less than 20 GB. Database can grow up to 8 GB. 
Will grow slowly with the environment. Only storing metadata. Additional 20-25 GB for every additional 8 GB of memory. /var/lib/containers This is the mount point for the CRI-O runtime. Storage used for active container runtimes, including pods, and storage of local images. Not used for registry storage. 50 GB for a node with 16 GB memory. Note that this sizing should not be used to determine minimum cluster requirements. Additional 20-25 GB for every additional 8 GB of memory. Growth is limited by capacity for running containers. /var/lib/kubelet Ephemeral volume storage for pods. This includes anything external that is mounted into a container at runtime. Includes environment variables, kube secrets, and data volumes not backed by persistent volumes. Varies Minimal if pods requiring storage are using persistent volumes. If using ephemeral storage, this can grow quickly. 9.4. Optimizing storage performance for Microsoft Azure OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. For production Azure clusters and clusters with intensive workloads, the virtual machine operating system disk for control plane machines should be able to sustain a tested and recommended minimum throughput of 5000 IOPS / 200MBps. This throughput can be provided by having a minimum of 1 TiB Premium SSD (P30). In Azure and Azure Stack Hub, disk performance is directly dependent on SSD disk sizes. To achieve the throughput supported by a Standard_D8s_v3 virtual machine, or other similar machine types, and the target of 5000 IOPS, at least a P30 disk is required. Host caching must be set to ReadOnly for low latency and high IOPS and throughput when reading data. Reading data from the cache, which is present either in the VM memory or in the local SSD disk, is much faster than reading from the disk, which is in the blob storage. 9.5. Additional resources Configuring the Elasticsearch log store
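To relate the directory table in Section 9.3 to a live node, the current consumption of those paths can be checked from a debug shell; the node name is a placeholder, and /var/lib/etcd is only present on control plane nodes.
$ oc debug node/<node_name>
sh-4.4# chroot /host
sh-4.4# du -sh /var/log /var/lib/etcd /var/lib/containers /var/lib/kubelet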
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/scalability_and_performance/optimizing-storage
Chapter 7. File System
Chapter 7. File System XFS scalability The XFS file system is currently supported in Red Hat Enterprise Linux 6 and is well suited for very large files and file systems on a single host. Integrated backup and restore, direct I/O and online resizing of the file system are some of the benefits that this file system provides. The XFS implementation has been improved to better handle metadata intensive workloads. An example of this type of workload is accessing thousands of small files in a directory. Prior to this enhancement, metadata processing could cause a bottleneck and lead to degraded performance. To address this problem an option to delay the logging of the metadata has been added that provides a significant performance improvement. As a result of this delayed logging of metadata, XFS performance is on par with ext4 for such workloads. The default mount options have also been updated to use delayed logging. Parallel NFS Parallel NFS (pNFS) is a part of the NFS v4.1 standard that allows clients to access storage devices directly and in parallel. The pNFS architecture eliminates the scalability and performance issues associated with NFS servers in deployment today. pNFS supports 3 different storage protocols or layouts: files, objects and blocks. The Red Hat Enterprise Linux 6.2 NFS client supports the files layout protocol. To automatically enable the pNFS functionality, create the /etc/modprobe.d/dist-nfsv41.conf file with the following line and reboot the system: Now when the -o minorversion=1 mount option is specified, and the server is pNFS-enabled, the pNFS client code is automatically enabled. This feature is a Technology Preview. For more information on pNFS, refer to http://www.pnfs.com/ . Asynchronous writes in CIFS The CIFS (Common Internet File System) protocol allows for a unified way to access remote files on disparate operating systems. The CIFS client has traditionally only allowed for synchronous writes. This meant that the client process would not yield back control until the writes were successfully completed. This can lead to degraded performance for large transactions that take long to complete. The CIFS client has been updated to write data in parallel without the need to wait for the sequential writes. This change can now result in performance improvements up to 200%. CIFS NTLMSSP authentication Support for NTLMSSP authentication has been added to CIFS. In addition, CIFS now uses the kernel's crypto API. autofs4 module The autofs4 module has been updated to kernel version 2.6.38. Fixed tracepoints for ext3 and jbd Fixed tracepoints have been added to ext3 and jbd . Mount options in superblock Support for the -o nobarrier mount option in ext4 , and its utilities: tune2fs , debugfs , libext2fs , has been added.
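Returning to the Parallel NFS item above: once the dist-nfsv41.conf file is in place and the system has been rebooted, the client code is exercised by requesting an NFSv4.1 mount, roughly as follows. The server and export names are placeholders, and the server must be pNFS-enabled.
~]# mount -t nfs4 -o minorversion=1 server.example.com:/export /mnt
~]# mount | grep /mnt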
[ "alias nfs-layouttype4-1 nfs_layout_nfsv41_files" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_release_notes/filesystem
Part I. Developing Applications
Part I. Developing Applications
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/riderdevpart
10.2. Configure 802.1Q VLAN tagging Using the Text User Interface, nmtui
10.2. Configure 802.1Q VLAN tagging Using the Text User Interface, nmtui The text user interface tool nmtui can be used to configure 802.1Q VLANs in a terminal window. Issue the following command to start the tool: The text user interface appears. Any invalid command prints a usage message. To navigate, use the arrow keys or press Tab to step forwards and press Shift + Tab to step back through the options. Press Enter to select an option. The Space bar toggles the status of a check box. From the starting menu, select Edit a connection . Select Add , the New Connection screen opens. Figure 10.1. The NetworkManager Text User Interface Add a VLAN Connection menu Select VLAN , the Edit connection screen opens. Follow the on-screen prompts to complete the configuration. Figure 10.2. The NetworkManager Text User Interface Configuring a VLAN Connection menu See Section 10.5.1.1, "Configuring the VLAN Tab" for definitions of the VLAN terms. See Section 3.2, "Configuring IP Networking with nmtui" for information on installing nmtui .
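If you need to script the same configuration rather than work interactively, an equivalent VLAN connection can also be created with nmcli; this is only an illustrative sketch, and the interface name, parent device, and VLAN ID below are placeholders.
~]$ nmcli con add type vlan con-name vlan10 ifname eth0.10 dev eth0 id 10
~]$ nmcli con show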
[ "~]USD nmtui" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-configure_802_1q_vlan_tagging_using_the_text_user_interface_nmtui
Chapter 8. Basic Red Hat Ceph Storage client setup
Chapter 8. Basic Red Hat Ceph Storage client setup As a storage administrator, you have to set up client machines with a basic configuration to interact with the storage cluster. Most client machines only need the ceph-common package and its dependencies installed. This package supplies the basic ceph and rados commands, as well as other commands such as mount.ceph and rbd . 8.1. Configuring file setup on client machines Client machines generally need a smaller configuration file than a full-fledged storage cluster member. You can generate a minimal configuration file that gives clients the details they need to reach the Ceph monitors. Prerequisites A running Red Hat Ceph Storage cluster. Root access to the nodes. Procedure On the node where you want to set up the files, create a ceph directory in the /etc folder: Example Navigate to the /etc/ceph directory: Example Generate the configuration file in the ceph directory: Example The contents of this file should be saved as /etc/ceph/ceph.conf . You can use this configuration file to reach the Ceph monitors. 8.2. Setting up the keyring on client machines Most Ceph clusters run with authentication enabled, and the client needs the keys in order to communicate with cluster machines. You can generate a keyring that gives clients the credentials they need to authenticate to the Ceph monitors. Prerequisites A running Red Hat Ceph Storage cluster. Root access to the nodes. Procedure On the node where you want to set up the keyring, create a ceph directory in the /etc folder: Example Navigate to the /etc/ceph directory: Example Generate the keyring for the client: Syntax Example Verify the output in the ceph.keyring file: Example The resulting output should be put into a keyring file, for example /etc/ceph/ceph.keyring .
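Once both files are in place, connectivity can be checked from the client; the client.fs name matches the keyring example in this section, and the check assumes the client key was created with at least read capability on the monitors (for example, mon 'allow r' ).
[root@client ~]# ceph -n client.fs -s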
[ "mkdir /etc/ceph/", "cd /etc/ceph/", "ceph config generate-minimal-conf minimal ceph.conf for 417b1d7a-a0e6-11eb-b940-001a4a000740 [global] fsid = 417b1d7a-a0e6-11eb-b940-001a4a000740 mon_host = [v2:10.74.249.41:3300/0,v1:10.74.249.41:6789/0]", "mkdir /etc/ceph/", "cd /etc/ceph/", "ceph auth get-or-create client. CLIENT_NAME -o /etc/ceph/ NAME_OF_THE_FILE", "ceph auth get-or-create client.fs -o /etc/ceph/ceph.keyring", "cat ceph.keyring [client.fs] key = AQAvoH5gkUCsExAATz3xCBLd4n6B6jRv+Z7CVQ==" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/operations_guide/basic-red-hat-ceph-storage-client-setup
Sandboxed Containers Support for OpenShift
Sandboxed Containers Support for OpenShift OpenShift Container Platform 4.12 OpenShift sandboxed containers guide Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/sandboxed_containers_support_for_openshift/index
probe::signal.check_ignored.return
probe::signal.check_ignored.return Name probe::signal.check_ignored.return - Completion of the check to see whether the signal is ignored Synopsis signal.check_ignored.return Values name Name of the probe point retstr Return value as a string
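A minimal sketch of how this return probe might be used on the command line; the output format is arbitrary and uses only the two values listed above.
~]# stap -e 'probe signal.check_ignored.return { printf("%s returned %s\n", name, retstr) }'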
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-signal-check-ignored-return
Chapter 15. Determining Certificate System product version
Chapter 15. Determining Certificate System product version The Red Hat Certificate System product version is stored in the /usr/share/pki/CS_SERVER_VERSION file. To display the version: To find the product version of a running server, access the following URLs from your browser: http:// host_name : port_number /ca/admin/ca/getStatus http:// host_name : port_number /kra/admin/kra/getStatus http:// host_name : port_number /ocsp/admin/ocsp/getStatus http:// host_name : port_number /tks/admin/tks/getStatus http:// host_name : port_number /tps/admin/tps/getStatus Note Note that each component is a separate package and thus could have a separate version number. The above will show the version number for each currently running component.
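For example, the CA status URL can also be retrieved with curl; the host name and port below are placeholders for your deployment's values, and an HTTPS connector would additionally require the CA certificate or the -k option.
~]# curl http://host_name:port_number/ca/admin/ca/getStatus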
[ "cat /usr/share/pki/CS_SERVER_VERSION Red Hat Certificate System {Version} (Batch Update 1)" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide_common_criteria_edition/determining_certificate_system_product_version
Chapter 9. Configure disk encryption
Chapter 9. Configure disk encryption 9.1. Configuring Network-Bound Disk Encryption key servers Prerequisites You must have installed a Network-Bound Disk Encryption key server ( Installing Network-Bound Disk Encryption key servers ). Procedure Start and enable the tangd service: Run the following command on each Network-Bound Disk Encryption (NBDE) key server. Verify that hyperconverged hosts have access to the key server. Log in to a hyperconverged host. Request a decryption key from the key server. If you see output like the following, the key server is accessible and advertising keys correctly. 9.2. Configuring hyperconverged hosts as Network-Bound Disk Encryption clients 9.2.1. Defining disk encryption configuration details Log in to the first hyperconverged host. Change into the hc-ansible-deployment directory: Make a copy of the luks_tang_inventory.yml file for future reference. Define your configuration in the luks_tang_inventory.yml file. Use the example luks_tang_inventory.yml file to define the details of disk encryption on each host. A complete outline of this file is available in Understanding the luks_tang_inventory.yml file . Encrypt the luks_tang_inventory.yml file and specify a password using ansible-vault . The required variables in luks_tang_inventory.yml include password values, so it is important to encrypt the file to protect the password values. Enter and confirm a new vault password when prompted. 9.2.2. Executing the disk encryption configuration playbook Prerequisites Define configuration in the luks_tang_inventory.yml playbook: Section 9.2.1, "Defining disk encryption configuration details" . Hyperconverged hosts must have encrypted boot disks. Procedure Log in to the first hyperconverged host. Change into the hc-ansible-deployment directory. Run the following command as the root user to start the configuration process. Enter the vault password for this file when prompted to start disk encryption configuration. Verify Reboot each host and verify that they are able to boot to a login prompt without requiring manual entry of the decryption passphrase. Note that the devices that use disk encryption have a path of /dev/mapper/luks_sdX when you continue with Red Hat Hyperconverged Infrastructure for Virtualization setup. Troubleshooting The given boot device /dev/sda2 is not encrypted. Solution: Reinstall the hyperconverged hosts using the process outlined in Section 5.1, "Installing hyperconverged hosts" , ensuring that you select Encrypt my data during the installation process and follow all directives related to disk encryption. The output has been hidden due to the fact that no_log: true was specified for this result. This output has been censored in order to not expose a passphrase. If you see this output for the Encrypt devices using key file task, the device failed to encrypt. You may have provided the incorrect disk in the inventory file. Solution: Clean up the deployment attempt using Cleaning up Network-Bound Disk Encryption after a failed deployment . Then correct the disk names in the inventory file. Non-zero return code from Tang server This error indicates that the server cannot access the url provided, either because the FQDN provided is incorrect or because it cannot be found from the host. Solution: Correct the url value provided for the NBDE key server or ensure that the url value is accessible from the host. 
Then run the playbook again with the bindtang tag: For any other playbook failures, use the instructions in Cleaning up Network-Bound Disk Encryption after a failed deployment to clean up your deployment. Review the playbook and inventory files for incorrect values and test access to all servers before executing the configuration playbook again.
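In addition to the reboot test in the Verify step, the Tang binding on an encrypted device can be inspected directly with clevis and cryptsetup; the device name below is a placeholder for one of the encrypted disks.
# clevis luks list -d /dev/sdb
# cryptsetup luksDump /dev/sdb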
[ "systemctl enable tangd.socket --now", "curl key-server.example.com /adv", "{\"payload\":\"eyJrZXlzIjpbeyJhbGciOiJFQ01SIiwiY3J2IjoiUC01MjEiLCJrZXlfb3BzIjpbImRlcml2ZUtleSJdLCJrdHkiOiJFQyIsIngiOiJBQ2ZjNVFwVmlhal9wNWcwUlE4VW52dmdNN1AyRTRqa21XUEpSM3VRUkFsVWp0eWlfZ0Y5WEV3WmU5TmhIdHhDaG53OXhMSkphajRieVk1ZVFGNGxhcXQ2IiwieSI6IkFOMmhpcmNpU2tnWG5HV2VHeGN1Nzk3N3B3empCTzZjZWt5TFJZdlh4SkNvb3BfNmdZdnR2bEpJUk4wS211Y1g3WHUwMlNVWlpqTVVxU3EtdGwyeEQ1SGcifSx7ImFsZyI6IkVTNTEyIiwiY3J2IjoiUC01MjEiLCJrZXlfb3BzIjpbInZlcmlmeSJdLCJrdHkiOiJFQyIsIngiOiJBQXlXeU8zTTFEWEdIaS1PZ04tRFhHU29yNl9BcUlJdzQ5OHhRTzdMam1kMnJ5bDN2WUFXTUVyR1l2MVhKdzdvbEhxdEdDQnhqV0I4RzZZV09vLWRpTUxwIiwieSI6IkFVWkNXUTAxd3lVMXlYR2R0SUMtOHJhVUVadWM5V3JyekFVbUIyQVF5VTRsWDcxd1RUWTJEeDlMMzliQU9tVk5oRGstS2lQNFZfYUlsZDFqVl9zdHRuVGoifV19\",\"protected\":\"eyJhbGciOiJFUzUxMiIsImN0eSI6Imp3ay1zZXQranNvbiJ9\",\"signature\":\"ARiMIYnCj7-1C-ZAQ_CKee676s_vYpi9J94WBibroou5MRsO6ZhRohqh_SCbW1jWWJr8btymTfQgBF_RwzVNCnllAXt_D5KSu8UDc4LnKU-egiV-02b61aiWB0udiEfYkF66krIajzA9y5j7qTdZpWsBObYVvuoJvlRo_jpzXJv0qEMi\"}", "cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment", "cp luks_tang_inventory.yml luks_tang_inventory.yml.backup", "ansible-vault encrypt luks_tang_inventory.yml", "cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment", "ansible-playbook -i luks_tang_inventory.yml tasks/luks_tang_setup.yml --tags=blacklistdevices,luksencrypt,bindtang --ask-vault-pass", "TASK [Check if root device is encrypted] fatal: [server1.example.com]: FAILED! => {\"changed\": false, \"msg\": \" The given boot device /dev/sda2 is not encrypted. \"}", "TASK [gluster.infra/roles/backend_setup : Encrypt devices using key file ] failed: [host1.example.com] (item=None) => {\"censored\": \" the output has been hidden due to the fact that no_log: true was specified for this result \", \"changed\": true}", "TASK [gluster.infra/roles/backend_setup : Download the advertisement from tang server for IPv4] * failed: [host1.example.com] (item={ url : http://tang-server.example.com }) => {\"ansible_index_var\": \"index\", \"ansible_loop_var\": \"item\", \"changed\": true, \"cmd\": \"curl -sfg \\\"http://tang-server.example.com/adv\\\" -o /etc/adv0.jws\", \"delta\": \"0:02:08.703711\", \"end\": \"2020-06-10 18:18:09.853701\", \"index\": 0, \"item\": {\"url\": \" http://tang-server.example.com \"}, \"msg\": \" non-zero return code *\", \"rc\": 7, \"start\": \"2020-06-10 18:16:01.149990\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ansible-playbook -i luks_tang_inventory.yml tasks/luks_tang_setup.yml --ask-vault-pass --tags=bindtang" ]
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/assembly_configure-disk-encryption
Chapter 2. Other installable plugins
Chapter 2. Other installable plugins The following Technology Preview plugins are not preinstalled and must be installed from an external source:
Ansible Automation Platform Frontend - plugin @ansible/plugin-backstage-rhaap , version 1.0.0 (installation details are available from the linked documentation)
Ansible Automation Platform - plugin @ansible/plugin-backstage-rhaap-backend , version 1.0.0 (installation details are available from the linked documentation)
Ansible Automation Platform Scaffolder Backend - plugin @ansible/plugin-scaffolder-backend-module-backstage-rhaap , version 1.0.0 (installation details are available from the linked documentation)
Note The above Red Hat Ansible Automation Platform (RHAAP) plugins can be used as a replacement for the older plugin listed in the Technology Preview plugins section of the Configuring plugins in Red Hat Developer Hub guide.
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/dynamic_plugins_reference/rhdh-compatible-plugins
Chapter 1. Setting up the Apache HTTP web server
Chapter 1. Setting up the Apache HTTP web server 1.1. Introduction to the Apache HTTP web server A web server is a network service that serves content to a client over the web. This typically means web pages, but any other documents can be served as well. Web servers are also known as HTTP servers, as they use the hypertext transport protocol ( HTTP ). The Apache HTTP Server , httpd , is an open source web server developed by the Apache Software Foundation . If you are upgrading from a release of Red Hat Enterprise Linux, you have to update the httpd service configuration accordingly. This section reviews some of the newly added features, and guides you through the update of prior configuration files. 1.2. Notable changes in the Apache HTTP Server RHEL 9 provides version 2.4.62 of the Apache HTTP Server. Notable changes over version 2.4.37 distributed with RHEL 8 include: Apache HTTP Server Control Interface ( apachectl ): The systemctl pager is now disabled for apachectl status output. The apachectl command now fails instead of giving a warning if you pass additional arguments. The apachectl graceful-stop command now returns immediately. The apachectl configtest command now executes the httpd -t command without changing the SELinux context. The apachectl(8) man page in RHEL now fully documents differences from upstream apachectl . Apache eXtenSion tool ( apxs ): The /usr/bin/apxs command no longer uses or exposes compiler optimisation flags as applied when building the httpd package. You can now use the /usr/lib64/httpd/build/vendor-apxs command to apply the same compiler flags as used to build httpd . To use the vendor-apxs command, you must install the redhat-rpm-config package first. Apache modules: The mod_lua module is now provided in a separate package. The mod_php module provided with PHP for use with the Apache HTTP Server has been removed. Since RHEL 8, PHP scripts are run using the FastCGI Process Manager ( php-fpm ) by default. For more information, see Using PHP with the Apache HTTP Server . Configuration syntax changes: In the deprecated Allow directive provided by the mod_access_compat module, a comment (the # character) now triggers a syntax error instead of being silently ignored. Other changes: Kernel thread IDs are now used directly in error log messages, making them both accurate and more concise. Many minor enhancements and bug fixes. Several new interfaces are available to module authors. There are no backwards-incompatible changes to the httpd module API since RHEL 8. Apache HTTP Server 2.4 is the initial version of this Application Stream, which you can install easily as an RPM package. 1.3. The Apache configuration files The httpd , by default, reads the configuration files after start. You can see the list of the locations of configuration files in the table below. Table 1.1. The httpd service configuration files Path Description /etc/httpd/conf/httpd.conf The main configuration file. /etc/httpd/conf.d/ An auxiliary directory for configuration files that are included in the main configuration file. /etc/httpd/conf.modules.d/ An auxiliary directory for configuration files which load installed dynamic modules packaged in Red Hat Enterprise Linux. In the default configuration, these configuration files are processed first. Although the default configuration is suitable for most situations, you can use also other configuration options. For any changes to take effect, restart the web server first. 
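As an illustration of the auxiliary directory described above, a minimal drop-in configuration file might look like the following. The file name and the settings in it are hypothetical examples rather than defaults shipped with the package; files in /etc/httpd/conf.d/ are simply merged into the main configuration when the service starts:

    # /etc/httpd/conf.d/99-custom.conf  (hypothetical example)
    # Hide detailed version information in the Server response header.
    ServerTokens Prod

    # Allow directory listings for a downloads area.
    <Directory "/var/www/html/downloads">
        Options Indexes
        Require all granted
    </Directory>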
To check the configuration for possible errors, type the following at a shell prompt: To make the recovery from mistakes easier, make a copy of the original file before editing it. 1.4. Managing the httpd service This section describes how to start, stop, and restart the httpd service. Prerequisites The Apache HTTP Server is installed. Procedure To start the httpd service, enter: To stop the httpd service, enter: To restart the httpd service, enter: 1.5. Setting up a single-instance Apache HTTP Server You can set up a single-instance Apache HTTP Server to serve static HTML content. Follow the procedure if the web server should provide the same content for all domains associated with the server. If you want to provide different content for different domains, set up name-based virtual hosts. For details, see Configuring Apache name-based virtual hosts . Procedure Install the httpd package: If you use firewalld , open the TCP port 80 in the local firewall: Enable and start the httpd service: Optional: Add HTML files to the /var/www/html/ directory. Note When adding content to /var/www/html/ , files and directories must be readable by the user under which httpd runs by default. The content owner can be the either the root user and root user group, or another user or group of the administrator's choice. If the content owner is the root user and root user group, the files must be readable by other users. The SELinux context for all the files and directories must be httpd_sys_content_t , which is applied by default to all content within the /var/www directory. Verification Connect with a web browser to http:// server_IP_or_host_name / . If the /var/www/html/ directory is empty or does not contain an index.html or index.htm file, Apache displays the Red Hat Enterprise Linux Test Page . If /var/www/html/ contains HTML files with a different name, you can load them by entering the URL to that file, such as http:// server_IP_or_host_name / example.html . Additional resources Apache manual: Installing the Apache HTTP server manual . See the httpd.service(8) man page on your system. 1.6. Configuring Apache name-based virtual hosts Name-based virtual hosts enable Apache to serve different content for different domains that resolve to the IP address of the server. You can set up a virtual host for both the example.com and example.net domain with separate document root directories. Both virtual hosts serve static HTML content. Prerequisites Clients and the web server resolve the example.com and example.net domain to the IP address of the web server. Note that you must manually add these entries to your DNS server. Procedure Install the httpd package: Edit the /etc/httpd/conf/httpd.conf file: Append the following virtual host configuration for the example.com domain: These settings configure the following: All settings in the <VirtualHost *:80> directive are specific for this virtual host. DocumentRoot sets the path to the web content of the virtual host. ServerName sets the domains for which this virtual host serves content. To set multiple domains, add the ServerAlias parameter to the configuration and specify the additional domains separated with a space in this parameter. CustomLog sets the path to the access log of the virtual host. ErrorLog sets the path to the error log of the virtual host. Note Apache uses the first virtual host found in the configuration also for requests that do not match any domain set in the ServerName and ServerAlias parameters. 
This also includes requests sent to the IP address of the server. Append a similar virtual host configuration for the example.net domain: Create the document roots for both virtual hosts: If you set paths in the DocumentRoot parameters that are not within /var/www/ , set the httpd_sys_content_t context on both document roots: These commands set the httpd_sys_content_t context on the /srv/example.com/ and /srv/example.net/ directory. Note that you must install the policycoreutils-python-utils package to run the restorecon command. If you use firewalld , open port 80 in the local firewall: Enable and start the httpd service: Verification Create a different example file in each virtual host's document root: Use a browser and connect to http://example.com . The web server shows the example file from the example.com virtual host. Use a browser and connect to http://example.net . The web server shows the example file from the example.net virtual host. Additional resources Installing the Apache HTTP Server manual - Virtual Hosts 1.7. Configuring Kerberos authentication for the Apache HTTP web server To perform Kerberos authentication in the Apache HTTP web server, RHEL 9 uses the mod_auth_gssapi Apache module. The Generic Security Services API ( GSSAPI ) is an interface for applications that make requests to use security libraries, such as Kerberos. The gssproxy service allows to implement privilege separation for the httpd server, which optimizes this process from the security point of view. Note The mod_auth_gssapi module replaces the removed mod_auth_kerb module. Prerequisites The httpd , mod_auth_gssapi and gssproxy packages are installed. The Apache web server is set up and the httpd service is running. 1.7.1. Setting up GSS-Proxy in an IdM environment This procedure describes how to set up GSS-Proxy to perform Kerberos authentication in the Apache HTTP web server. Procedure Enable access to the keytab file of HTTP/<SERVER_NAME>@realm principal by creating the service principal: Retrieve the keytab for the principal stored in the /etc/gssproxy/http.keytab file: This step sets permissions to 400, thus only the root user has access to the keytab file. The apache user does not. Create the /etc/gssproxy/80-httpd.conf file with the following content: Restart and enable the gssproxy service: Additional resources gssproxy(8) man pages on your system gssproxy-mech(8) man pages on your system gssproxy.conf(5) man pages on your system 1.7.2. Configuring Kerberos authentication for a directory shared by the Apache HTTP web server This procedure describes how to configure Kerberos authentication for the /var/www/html/private/ directory. Prerequisites The gssproxy service is configured and running. Procedure Configure the mod_auth_gssapi module to protect the /var/www/html/private/ directory: Create system unit configuration drop-in file: Add the following parameter to the system drop-in file: Reload the systemd configuration: Restart the httpd service: Verification Obtain a Kerberos ticket: Open the URL to the protected directory in a browser. 1.8. Configuring TLS encryption on an Apache HTTP Server By default, Apache provides content to clients using an unencrypted HTTP connection. This section describes how to enable TLS encryption and configure frequently used encryption-related settings on an Apache HTTP Server. Prerequisites The Apache HTTP Server is installed and running. 1.8.1. 
Adding TLS encryption to an Apache HTTP Server You can enable TLS encryption on an Apache HTTP Server for the example.com domain. Prerequisites The Apache HTTP Server is installed and running. The private key is stored in the /etc/pki/tls/private/example.com.key file. For details about creating a private key and certificate signing request (CSR), as well as how to request a certificate from a certificate authority (CA), see your CA's documentation. Alternatively, if your CA supports the ACME protocol, you can use the mod_md module to automate retrieving and provisioning TLS certificates. The TLS certificate is stored in the /etc/pki/tls/certs/example.com.crt file. If you use a different path, adapt the corresponding steps of the procedure. The CA certificate is stored in the /etc/pki/tls/certs/ca.crt file. If you use a different path, adapt the corresponding steps of the procedure. Clients and the web server resolve the host name of the server to the IP address of the web server. If the server runs RHEL 9.2 or later and the FIPS mode is enabled, clients must either support the Extended Master Secret (EMS) extension or use TLS 1.3. TLS 1.2 connections without EMS fail. For more information, see the Red Hat Knowledgebase solution TLS extension "Extended Master Secret" enforced . Procedure Install the mod_ssl package: Edit the /etc/httpd/conf.d/ssl.conf file and add the following settings to the <VirtualHost _default_:443> directive: Set the server name: Important The server name must match the entry set in the Common Name field of the certificate. Optional: If the certificate contains additional host names in the Subject Alt Names (SAN) field, you can configure mod_ssl to provide TLS encryption also for these host names. To configure this, add the ServerAliases parameter with corresponding names: Set the paths to the private key, the server certificate, and the CA certificate: For security reasons, configure that only the root user can access the private key file: Warning If the private key was accessed by unauthorized users, revoke the certificate, create a new private key, and request a new certificate. Otherwise, the TLS connection is no longer secure. If you use firewalld , open port 443 in the local firewall: Restart the httpd service: Note If you protected the private key file with a password, you must enter this password each time when the httpd service starts. Verification Use a browser and connect to https:// example.com . Additional resources SSL/TLS Encryption Security considerations for TLS in RHEL 9 1.8.2. Setting the supported TLS protocol versions on an Apache HTTP Server By default, the Apache HTTP Server on RHEL uses the system-wide crypto policy that defines safe default values, which are also compatible with recent browsers. For example, the DEFAULT policy defines that only the TLSv1.2 and TLSv1.3 protocol versions are enabled in apache. You can manually configure which TLS protocol versions your Apache HTTP Server supports. Follow the procedure if your environment requires to enable only specific TLS protocol versions, for example: If your environment requires that clients can also use the weak TLS1 (TLSv1.0) or TLS1.1 protocol. If you want to configure that Apache only supports the TLSv1.2 or TLSv1.3 protocol. Prerequisites TLS encryption is enabled on the server as described in Adding TLS encryption to an Apache HTTP server . 
If the server runs RHEL 9.2 or later and the FIPS mode is enabled, clients must either support the Extended Master Secret (EMS) extension or use TLS 1.3. TLS 1.2 connections without EMS fail. For more information, see the Red Hat Knowledgebase solution TLS extension "Extended Master Secret" enforced . Procedure Edit the /etc/httpd/conf/httpd.conf file, and add the following setting to the <VirtualHost> directive for which you want to set the TLS protocol version. For example, to enable only the TLSv1.3 protocol: Restart the httpd service: Verification Use the following command to verify that the server supports TLSv1.3 : Use the following command to verify that the server does not support TLSv1.2 : If the server does not support the protocol, the command returns an error: Optional: Repeat the command for other TLS protocol versions. Additional resources update-crypto-policies(8) man page on your system Using system-wide cryptographic policies . For further details about the SSLProtocol parameter, refer to the mod_ssl documentation in the Apache manual: Installing the Apache HTTP server manual . 1.8.3. Setting the supported ciphers on an Apache HTTP Server By default, the Apache HTTP Server uses the system-wide crypto policy that defines safe default values, which are also compatible with recent browsers. For the list of ciphers the system-wide crypto allows, see the /etc/crypto-policies/back-ends/openssl.config file. You can manually configure which ciphers your Apache HTTP Server supports. Follow the procedure if your environment requires specific ciphers. Prerequisites TLS encryption is enabled on the server as described in Adding TLS encryption to an Apache HTTP server . Procedure Edit the /etc/httpd/conf/httpd.conf file, and add the SSLCipherSuite parameter to the <VirtualHost> directive for which you want to set the TLS ciphers: This example enables only the EECDH+AESGCM , EDH+AESGCM , AES256+EECDH , and AES256+EDH ciphers and disables all ciphers which use the SHA1 and SHA256 message authentication code (MAC). Restart the httpd service: Verification To display the list of ciphers the Apache HTTP Server supports: Install the nmap package: Use the nmap utility to display the supported ciphers: Additional resources update-crypto-policies(8) man page on your system Using system-wide cryptographic policies . SSLCipherSuite 1.9. Configuring TLS client certificate authentication Client certificate authentication enables administrators to allow only users who authenticate using a certificate to access resources on the web server. You can configure client certificate authentication for the /var/www/html/Example/ directory. If the Apache HTTP Server uses the TLS 1.3 protocol, certain clients require additional configuration. For example, in Firefox, set the security.tls.enable_post_handshake_auth parameter in the about:config menu to true . For further details, see Transport Layer Security version 1.3 in Red Hat Enterprise Linux 8 . Prerequisites TLS encryption is enabled on the server as described in Adding TLS encryption to an Apache HTTP server . Procedure Edit the /etc/httpd/conf/httpd.conf file and add the following settings to the <VirtualHost> directive for which you want to configure client authentication: The SSLVerifyClient require setting defines that the server must successfully validate the client certificate before the client can access the content in the /var/www/html/Example/ directory. 
Restart the httpd service: Verification Use the curl utility to access the https://example.com/Example/ URL without client authentication: The error indicates that the web server requires a client certificate authentication. Pass the client private key and certificate, as well as the CA certificate to curl to access the same URL with client authentication: If the request succeeds, curl displays the index.html file stored in the /var/www/html/Example/ directory. Additional resources mod_ssl configuration 1.10. Securing web applications on a web server using ModSecurity ModSecurity is an open source web application firewall (WAF) supported by various web servers such as Apache, Nginx, and IIS, which reduces security risks in web applications. ModSecurity provides customizable rule sets for configuring your server. The mod_security-crs package contains the core rule set (CRS) with rules against cross-website scripting, bad user agents, SQL injection, Trojans, session hijacking, and other exploits. 1.10.1. Deploying the ModSecurity web-based application firewall for Apache To reduce risks related to running web-based applications on your web server by deploying ModSecurity, install the mod_security and mod_security_crs packages for the Apache HTTP server. The mod_security_crs package provides the core rule set (CRS) for the ModSecurity web-based application firewall (WAF) module. Procedure Install the mod_security , mod_security_crs , and httpd packages: Start the httpd server: Verification Verify that the ModSecurity web-based application firewall is enabled on your Apache HTTP server: Check that the /etc/httpd/modsecurity.d/activated_rules/ directory contains rules provided by mod_security_crs : Additional resources Red Hat JBoss Core Services ModSecurity Guide An introduction to web application firewalls for Linux sysadmins 1.10.2. Adding a custom rule to ModSecurity If the rules contained in the ModSecurity core rule set (CRS) do not fit your scenario and if you want to prevent additional possible attacks, you can add your custom rules to the rule set used by the ModSecurity web-based application firewall. The following example demonstrates the addition of a simple rule. For creating more complex rules, see the reference manual on the ModSecurity Wiki website. Prerequisites ModSecurity for Apache is installed and enabled. Procedure Open the /etc/httpd/conf.d/mod_security.conf file in a text editor of your choice, for example: Add the following example rule after the line starting with SecRuleEngine On : The rule forbids the use of resources to the user if the data parameter contains the evil string. Save the changes, and quit the editor. Restart the httpd server: Verification Create a test .html page: Restart the httpd server: Request test.html without malicious data in the GET variable of the HTTP request: Request test.html with malicious data in the GET variable of the HTTP request: Check the /var/log/httpd/error_log file, and locate the log entry about denying access with the param data containing an evil data message: Additional resources ModSecurity Wiki 1.11. Installing the Apache HTTP Server manual You can install the Apache HTTP Server manual. This manual provides a detailed documentation of, for example: Configuration parameters and directives Performance tuning Authentication settings Modules Content caching Security tips Configuring TLS encryption After installing the manual, you can display it using a web browser. Prerequisites The Apache HTTP Server is installed and running. 
Procedure Install the httpd-manual package: Optional: By default, all clients connecting to the Apache HTTP Server can display the manual. To restrict access to a specific IP range, such as the 192.0.2.0/24 subnet, edit the /etc/httpd/conf.d/manual.conf file and add the Require ip 192.0.2.0/24 setting to the <Directory "/usr/share/httpd/manual"> directive: Restart the httpd service: Verification To display the Apache HTTP Server manual, connect with a web browser to http:// host_name_or_IP_address /manual/ 1.12. Working with Apache modules The httpd service is a modular application, and you can extend it with a number of Dynamic Shared Objects ( DSO s). Dynamic Shared Objects are modules that you can dynamically load or unload at runtime as necessary. You can find these modules in the /usr/lib64/httpd/modules/ directory. 1.12.1. Loading a DSO module As an administrator, you can choose the functionality to include in the server by configuring which modules the server should load. To load a particular DSO module, use the LoadModule directive. Note that modules provided by a separate package often have their own configuration file in the /etc/httpd/conf.modules.d/ directory. Prerequisites You have installed the httpd package. Procedure Search for the module name in the configuration files in the /etc/httpd/conf.modules.d/ directory: Edit the configuration file in which the module name was found, and uncomment the LoadModule directive of the module: If the module was not found, for example, because a RHEL package does not provide the module, create a configuration file, such as /etc/httpd/conf.modules.d/30-example.conf with the following directive: Restart the httpd service: 1.12.2. Compiling a custom Apache module You can create your own module and build it with the help of the httpd-devel package, which contains the include files, the header files, and the APache eXtenSion ( apxs ) utility required to compile a module. Prerequisites You have the httpd-devel package installed. Procedure Build a custom module with the following command: Verification Load the module the same way as described in Loading a DSO module . 1.13. Exporting a private key and certificates from an NSS database to use them in an Apache web server configuration Since RHEL 8 we no longer provide the mod_nss module for the Apache web server, and Red Hat recommends using the mod_ssl module. If you store your private key and certificates in a Network Security Services (NSS) database, follow this procedure to extract the key and certificates in Privacy Enhanced Mail (PEM) format . 1.14. Additional resources httpd(8) , httpd.service(8) , httpd.conf(5) , and apachectl(8) man pages on your system Kerberos authentication on an Apache HTTP server: Using GSS-Proxy for Apache httpd operation . Using Kerberos is an alternative way to enforce client authorization on an Apache HTTP Server. Configuring applications to use cryptographic hardware through PKCS #11 .
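To tie the TLS-related procedures together, the following sketch consolidates the directives from Section 1.8 into a single <VirtualHost _default_:443> block in /etc/httpd/conf.d/ssl.conf. It reuses the example.com certificate paths from the procedures above; the protocol and cipher restrictions are optional and are shown only to indicate where they belong:

    <VirtualHost _default_:443>
        ServerName example.com
        ServerAlias www.example.com server.example.com

        SSLEngine on
        SSLCertificateKeyFile "/etc/pki/tls/private/example.com.key"
        SSLCertificateFile    "/etc/pki/tls/certs/example.com.crt"
        SSLCACertificateFile  "/etc/pki/tls/certs/ca.crt"

        # Optional hardening (Sections 1.8.2 and 1.8.3); the cipher list
        # applies to TLS 1.2 and earlier connections.
        SSLProtocol    -All +TLSv1.2 +TLSv1.3
        SSLCipherSuite "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!SHA1:!SHA256"
    </VirtualHost>

After editing the file, validate the configuration with apachectl configtest and restart the httpd service as in the earlier procedures.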
[ "apachectl configtest Syntax OK", "systemctl start httpd", "systemctl stop httpd", "systemctl restart httpd", "dnf install httpd", "firewall-cmd --permanent --add-port=80/tcp firewall-cmd --reload", "systemctl enable --now httpd", "dnf install httpd", "<VirtualHost *:80> DocumentRoot \"/var/www/example.com/\" ServerName example.com CustomLog /var/log/httpd/example.com_access.log combined ErrorLog /var/log/httpd/example.com_error.log </VirtualHost>", "<VirtualHost *:80> DocumentRoot \"/var/www/example.net/\" ServerName example.net CustomLog /var/log/httpd/example.net_access.log combined ErrorLog /var/log/httpd/example.net_error.log </VirtualHost>", "mkdir /var/www/example.com/ mkdir /var/www/example.net/", "semanage fcontext -a -t httpd_sys_content_t \"/srv/example.com(/.*)?\" restorecon -Rv /srv/example.com/ semanage fcontext -a -t httpd_sys_content_t \"/srv/example.net(/.\\*)?\" restorecon -Rv /srv/example.net/", "firewall-cmd --permanent --add-port=80/tcp firewall-cmd --reload", "systemctl enable --now httpd", "echo \"vHost example.com\" > /var/www/example.com/index.html echo \"vHost example.net\" > /var/www/example.net/index.html", "ipa service-add HTTP/<SERVER_NAME>", "ipa-getkeytab -s USD(awk '/^server =/ {print USD3}' /etc/ipa/default.conf) -k /etc/gssproxy/http.keytab -p HTTP/USD(hostname -f)", "[service/HTTP] mechs = krb5 cred_store = keytab:/etc/gssproxy/http.keytab cred_store = ccache:/var/lib/gssproxy/clients/krb5cc_%U euid = apache", "systemctl restart gssproxy.service systemctl enable gssproxy.service", "<Location /var/www/html/private> AuthType GSSAPI AuthName \"GSSAPI Login\" Require valid-user </Location>", "systemctl edit httpd.service", "[Service] Environment=GSS_USE_PROXY=1", "systemctl daemon-reload", "systemctl restart httpd.service", "kinit", "dnf install mod_ssl", "ServerName example.com", "ServerAlias www.example.com server.example.com", "SSLCertificateKeyFile \"/etc/pki/tls/private/example.com.key\" SSLCertificateFile \"/etc/pki/tls/certs/example.com.crt\" SSLCACertificateFile \"/etc/pki/tls/certs/ca.crt\"", "chown root:root /etc/pki/tls/private/example.com.key chmod 600 /etc/pki/tls/private/example.com.key", "firewall-cmd --permanent --add-port=443/tcp firewall-cmd --reload", "systemctl restart httpd", "SSLProtocol -All TLSv1.3", "systemctl restart httpd", "openssl s_client -connect example.com :443 -tls1_3", "openssl s_client -connect example.com :443 -tls1_2", "140111600609088:error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol version:ssl/record/rec_layer_s3.c:1543:SSL alert number 70", "SSLCipherSuite \"EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!SHA1:!SHA256\"", "systemctl restart httpd", "dnf install nmap", "nmap --script ssl-enum-ciphers -p 443 example.com PORT STATE SERVICE 443/tcp open https | ssl-enum-ciphers: | TLSv1.2: | ciphers: | TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A | TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (dh 2048) - A | TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A", "<Directory \"/var/www/html/Example/\"> SSLVerifyClient require </Directory>", "systemctl restart httpd", "curl https://example.com/Example/ curl: (56) OpenSSL SSL_read: error:1409445C:SSL routines:ssl3_read_bytes:tlsv13 alert certificate required, errno 0", "curl --cacert ca.crt --key client.key --cert client.crt https://example.com/Example/", "dnf install -y mod_security mod_security_crs httpd", "systemctl restart httpd", "httpd -M | grep security security2_module (shared)", "ls /etc/httpd/modsecurity.d/activated_rules/ 
REQUEST-921-PROTOCOL-ATTACK.conf REQUEST-930-APPLICATION-ATTACK-LFI.conf", "vi /etc/httpd/conf.d/mod_security.conf", "SecRule ARGS:data \"@contains evil\" \"deny,status:403,msg:'param data contains evil data',id:1\"", "systemctl restart httpd", "echo \"mod_security test\" > /var/www/html/ test .html", "systemctl restart httpd", "curl http://localhost/test.html?data=good mod_security test", "curl localhost/test.html?data=xxxevilxxx <!DOCTYPE HTML PUBLIC \"-//IETF//DTD HTML 2.0//EN\"> <html><head> <title>403 Forbidden</title> </head><body> <h1>Forbidden</h1> <p>You do not have permission to access this resource.</p> </body></html>", "[Wed May 25 08:01:31.036297 2022] [:error] [pid 5839:tid 139874434791168] [client ::1:45658] [client ::1] ModSecurity: Access denied with code 403 (phase 2). String match \"evil\" at ARGS:data. [file \"/etc/httpd/conf.d/mod_security.conf\"] [line \"4\"] [id \"1\"] [msg \"param data contains evil data\"] [hostname \"localhost\"] [uri \"/test.html\"] [unique_id \"Yo4amwIdsBG3yZqSzh2GuwAAAIY\"]", "dnf install httpd-manual", "<Directory \"/usr/share/httpd/manual\"> Require ip 192.0.2.0/24 </Directory>", "systemctl restart httpd", "grep mod_ssl.so /etc/httpd/conf.modules.d/ *", "LoadModule ssl_module modules/mod_ssl.so", "LoadModule ssl_module modules/<custom_module>.so", "systemctl restart httpd", "apxs -i -a -c module_name.c" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/deploying_web_servers_and_reverse_proxies/setting-apache-http-server_deploying-web-servers-and-reverse-proxies
Chapter 20. Configuring PTP Using ptp4l
Chapter 20. Configuring PTP Using ptp4l 20.1. Introduction to PTP The Precision Time Protocol ( PTP ) is a protocol used to synchronize clocks in a network. When used in conjunction with hardware support, PTP is capable of sub-microsecond accuracy, which is far better than is normally obtainable with NTP . PTP support is divided between the kernel and user space. The kernel in Red Hat Enterprise Linux includes support for PTP clocks, which are provided by network drivers. The actual implementation of the protocol is known as linuxptp , a PTPv2 implementation according to the IEEE standard 1588 for Linux. The linuxptp package includes the ptp4l and phc2sys programs for clock synchronization. The ptp4l program implements the PTP boundary clock and ordinary clock. With hardware time stamping, it is used to synchronize the PTP hardware clock to the master clock, and with software time stamping it synchronizes the system clock to the master clock. The phc2sys program is needed only with hardware time stamping, for synchronizing the system clock to the PTP hardware clock on the network interface card ( NIC ). 20.1.1. Understanding PTP The clocks synchronized by PTP are organized in a master-slave hierarchy. The slaves are synchronized to their masters which may be slaves to their own masters. The hierarchy is created and updated automatically by the best master clock ( BMC ) algorithm, which runs on every clock. When a clock has only one port, it can be master or slave , such a clock is called an ordinary clock ( OC ). A clock with multiple ports can be master on one port and slave on another, such a clock is called a boundary clock ( BC ). The top-level master is called the grandmaster clock , which can be synchronized by using a Global Positioning System ( GPS ) time source. By using a GPS-based time source, disparate networks can be synchronized with a high-degree of accuracy. Figure 20.1. PTP grandmaster, boundary, and slave Clocks 20.1.2. Advantages of PTP One of the main advantages that PTP has over the Network Time Protocol ( NTP ) is hardware support present in various network interface controllers ( NIC ) and network switches. This specialized hardware allows PTP to account for delays in message transfer, and greatly improves the accuracy of time synchronization. While it is possible to use non-PTP enabled hardware components within the network, this will often cause an increase in jitter or introduce an asymmetry in the delay resulting in synchronization inaccuracies, which add up with multiple non-PTP aware components used in the communication path. To achieve the best possible accuracy, it is recommended that all networking components between PTP clocks are PTP hardware enabled. Time synchronization in larger networks where not all of the networking hardware supports PTP might be better suited for NTP . With hardware PTP support, the NIC has its own on-board clock, which is used to time stamp the received and transmitted PTP messages. It is this on-board clock that is synchronized to the PTP master, and the computer's system clock is synchronized to the PTP hardware clock on the NIC. With software PTP support, the system clock is used to time stamp the PTP messages and it is synchronized to the PTP master directly. Hardware PTP support provides better accuracy since the NIC can time stamp the PTP packets at the exact moment they are sent and received while software PTP support requires additional processing of the PTP packets by the operating system. 20.2. 
Using PTP In order to use PTP , the kernel network driver for the intended interface has to support either software or hardware time stamping capabilities. 20.2.1. Checking for Driver and Hardware Support In addition to hardware time stamping support being present in the driver, the NIC must also be capable of supporting this functionality in the physical hardware. The best way to verify the time stamping capabilities of a particular driver and NIC is to use the ethtool utility to query the interface. In this example, eth3 is the interface you want to check: Note The PTP Hardware Clock value printed by ethtool is the index of the PTP hardware clock. It corresponds to the naming of the /dev/ptp* devices. The first PHC has an index of 0. For software time stamping support, the parameters list should include: SOF_TIMESTAMPING_SOFTWARE SOF_TIMESTAMPING_TX_SOFTWARE SOF_TIMESTAMPING_RX_SOFTWARE For hardware time stamping support, the parameters list should include: SOF_TIMESTAMPING_RAW_HARDWARE SOF_TIMESTAMPING_TX_HARDWARE SOF_TIMESTAMPING_RX_HARDWARE 20.2.2. Installing PTP The kernel in Red Hat Enterprise Linux includes support for PTP . User space support is provided by the tools in the linuxptp package. To install linuxptp , issue the following command as root : This will install ptp4l and phc2sys . Do not run more than one service to set the system clock's time at the same time. If you intend to serve PTP time using NTP , see Section 20.8, "Serving PTP Time with NTP" . 20.2.3. Starting ptp4l The ptp4l program can be started from the command line or it can be started as a service. When running as a service, options are specified in the /etc/sysconfig/ptp4l file. Options required for use both by the service and on the command line should be specified in the /etc/ptp4l.conf file. The /etc/sysconfig/ptp4l file includes the -f /etc/ptp4l.conf command line option, which causes the ptp4l program to read the /etc/ptp4l.conf file and process the options it contains. The use of the /etc/ptp4l.conf is explained in Section 20.4, "Specifying a Configuration File" . More information on the different ptp4l options and the configuration file settings can be found in the ptp4l(8) man page. Starting ptp4l as a Service To start ptp4l as a service, issue the following command as root : For more information on managing system services in Red Hat Enterprise Linux 7, see Chapter 10, Managing Services with systemd . Using ptp4l From The Command Line The ptp4l program tries to use hardware time stamping by default. To use ptp4l with hardware time stamping capable drivers and NICs, you must provide the network interface to use with the -i option. Enter the following command as root : Where eth3 is the interface you want to configure. Below is example output from ptp4l when the PTP clock on the NIC is synchronized to a master: The master offset value is the measured offset from the master in nanoseconds. The s0 , s1 , s2 strings indicate the different clock servo states: s0 is unlocked, s1 is clock step and s2 is locked. Once the servo is in the locked state ( s2 ), the clock will not be stepped (only slowly adjusted) unless the pi_offset_const option is set to a positive value in the configuration file (described in the ptp4l(8) man page). The adj value is the frequency adjustment of the clock in parts per billion (ppb). The path delay value is the estimated delay of the synchronization messages sent from the master in nanoseconds. Port 0 is a Unix domain socket used for local PTP management. 
Port 1 is the eth3 interface (based on the example above.) INITIALIZING, LISTENING, UNCALIBRATED and SLAVE are some of possible port states which change on the INITIALIZE, RS_SLAVE, MASTER_CLOCK_SELECTED events. In the last state change message, the port state changed from UNCALIBRATED to SLAVE indicating successful synchronization with a PTP master clock. Logging Messages From ptp4l By default, messages are sent to /var/log/messages . However, specifying the -m option enables logging to standard output which can be useful for debugging purposes. To enable software time stamping, the -S option needs to be used as follows: 20.2.3.1. Selecting a Delay Measurement Mechanism There are two different delay measurement mechanisms and they can be selected by means of an option added to the ptp4l command as follows: -P The -P selects the peer-to-peer ( P2P ) delay measurement mechanism. The P2P mechanism is preferred as it reacts to changes in the network topology faster, and may be more accurate in measuring the delay, than other mechanisms. The P2P mechanism can only be used in topologies where each port exchanges PTP messages with at most one other P2P port. It must be supported and used by all hardware, including transparent clocks, on the communication path. -E The -E selects the end-to-end ( E2E ) delay measurement mechanism. This is the default. The E2E mechanism is also referred to as the delay "request-response" mechanism. -A The -A enables automatic selection of the delay measurement mechanism. The automatic option starts ptp4l in E2E mode. It will change to P2P mode if a peer delay request is received. Note All clocks on a single PTP communication path must use the same mechanism to measure the delay. Warnings will be printed in the following circumstances: When a peer delay request is received on a port using the E2E mechanism. When a E2E delay request is received on a port using the P2P mechanism. 20.3. Using PTP with Multiple Interfaces When using PTP with multiple interfaces in different networks, it is necessary to change the reverse path forwarding mode to loose mode. Red Hat Enterprise Linux 7 defaults to using Strict Reverse Path Forwarding following the Strict Reverse Path recommendation from RFC 3704, Ingress Filtering for Multihomed Networks . See the Reverse Path Forwarding section in the Red Hat Enterprise Linux 7 Security Guide for more details. The sysctl utility is used to read and write values to tunables in the kernel. Changes to a running system can be made using sysctl commands directly on the command line and permanent changes can be made by adding lines to the /etc/sysctl.conf file. To change to loose mode filtering globally, enter the following commands as root : To change the reverse path filtering mode per network interface, use the net.ipv4. interface .rp_filter command on all PTP interfaces. For example, for an interface with device name em1 : To make these settings persistent across reboots, modify the /etc/sysctl.conf file. You can change the mode for all interfaces, or for a particular interface. To change the mode for all interfaces, open the /etc/sysctl.conf file with an editor running as the root user and add a line as follows: To change only certain interfaces, add multiple lines in the following format: Note When using the settings for all and particular interfaces as well, maximum value from conf/{all,interface}/rp_filter is used when doing source validation on each interface. 
You can also change the mode by using the default setting, which means that it applies only to the newly created interfaces. For more information on using the all , default , or a specific device settings in the sysctl parameters, see the Red Hat Knowledgebase article What is the difference between "all", "default" and a specific device in a sysctl parameter? . Note that you might experience issues of two types due to the timing of the sysctl service run during the boot process: Drivers are loaded before the sysctl service runs. In this case, affected network interfaces use the mode preset from the kernel, and sysctl defaults are ignored. For solution of this problem, see the Red Hat Knowledgebase article What is the difference between "all", "default" and a specific device in a sysctl parameter? . Drivers are loaded or reloaded after the sysctl service runs. In this case, it is possible that some sysctl.conf parameters are not used after reboot. These settings may not be available or they may return to defaults. For solution of this problem, see the Red Hat Knowledgebase article Some sysctl.conf parameters are not used after reboot, manually adjusting the settings works as expected . 20.4. Specifying a Configuration File The command line options and other options, which cannot be set on the command line, can be set in an optional configuration file. No configuration file is read by default, so it needs to be specified at runtime with the -f option. For example: A configuration file equivalent to the -i eth3 -m -S options shown above would look as follows: 20.5. Using the PTP Management Client The PTP management client, pmc , can be used to obtain additional information from ptp4l as follows: Setting the -b option to zero limits the boundary to the locally running ptp4l instance. A larger boundary value will retrieve the information also from PTP nodes further from the local clock. The retrievable information includes: stepsRemoved is the number of communication paths to the grandmaster clock. offsetFromMaster and master_offset is the last measured offset of the clock from the master in nanoseconds. meanPathDelay is the estimated delay of the synchronization messages sent from the master in nanoseconds. if gmPresent is true, the PTP clock is synchronized to a master, the local clock is not the grandmaster clock. gmIdentity is the grandmaster's identity. For a full list of pmc commands, type the following as root : Additional information is available in the pmc(8) man page. 20.6. Synchronizing the Clocks The phc2sys program is used to synchronize the system clock to the PTP hardware clock ( PHC ) on the NIC. The phc2sys service is configured in the /etc/sysconfig/phc2sys configuration file. The default setting in the /etc/sysconfig/phc2sys file is as follows: The -a option causes phc2sys to read the clocks to be synchronized from the ptp4l application. It will follow changes in the PTP port states, adjusting the synchronization between the NIC hardware clocks accordingly. The system clock is not synchronized, unless the -r option is also specified. If you want the system clock to be eligible to become a time source, specify the -r option twice. After making changes to /etc/sysconfig/phc2sys , restart the phc2sys service from the command line by issuing a command as root : Under normal circumstances, use systemctl commands to start, stop, and restart the phc2sys service. When you do not want to start phc2sys as a service, you can start it from the command line. 
For example, enter the following command as root : The -a option causes phc2sys to read the clocks to be synchronized from the ptp4l application. If you want the system clock to be eligible to become a time source, specify the -r option twice. Alternately, use the -s option to synchronize the system clock to a specific interface's PTP hardware clock. For example: The -w option waits for the running ptp4l application to synchronize the PTP clock and then retrieves the TAI to UTC offset from ptp4l . Normally, PTP operates in the International Atomic Time ( TAI ) timescale, while the system clock is kept in Coordinated Universal Time ( UTC ). The current offset between the TAI and UTC timescales is 36 seconds. The offset changes when leap seconds are inserted or deleted, which typically happens every few years. The -O option needs to be used to set this offset manually when the -w is not used, as follows: Once the phc2sys servo is in a locked state, the clock will not be stepped, unless the -S option is used. This means that the phc2sys program should be started after the ptp4l program has synchronized the PTP hardware clock. However, with -w , it is not necessary to start phc2sys after ptp4l as it will wait for it to synchronize the clock. The phc2sys program can also be started as a service by running: When running as a service, options are specified in the /etc/sysconfig/phc2sys file. More information on the different phc2sys options can be found in the phc2sys(8) man page. Note that the examples in this section assume the command is run on a slave system or slave port. 20.7. Verifying Time Synchronization When PTP time synchronization is working correctly, new messages with offsets and frequency adjustments are printed periodically to the ptp4l and phc2sys outputs if hardware time stamping is used. The output values converge shortly. You can see these messages in the /var/log/messages file. The following examples of the ptp4l and the phc2sys output contain: offset (in nanoseconds) frequency offset (in parts per billion (ppb)) path delay (in nanoseconds) Example of the ptp4l output: Example of the phc2sys output: To reduce the ptp4l output and print only the values, use the summary_interval directive. The summary_interval directive is specified as 2 to the power of n in seconds. For example, to reduce the output to every 1024 seconds, add the following line to the /etc/ptp4l.conf file: An example of the ptp4l output, with summary_interval set to 6: By default, summary_interval is set to 0, so messages are printed once per second, which is the maximum frequency. The messages are logged at the LOG_INFO level. To disable messages, use the -l option to set the maximum log level to 5 or lower: You can use the -u option to reduce the phc2sys output: Where summary-updates is the number of clock updates to include in summary statistics. An example follows: When used with these options, the interval for updating the statistics is set to 60 seconds ( -u ), phc2sys waits until ptp4l is in synchronized state ( -w ), and messages are printed to the standard output ( -m ). For further details about the phc2sys options, see the phc2sys(5) man page. The output includes: offset root mean square (rms) maximum absolute offset (max) frequency offset (freq): its mean, and standard deviation path delay (delay): its mean, and standard deviation 20.8. 
Serving PTP Time with NTP The ntpd daemon can be configured to distribute the time from the system clock synchronized by ptp4l or phc2sys by using the LOCAL reference clock driver. To prevent ntpd from adjusting the system clock, the ntp.conf file must not specify any NTP servers. The following is a minimal example of ntp.conf : Note When the DHCP client program, dhclient , receives a list of NTP servers from the DHCP server, it adds them to ntp.conf and restarts the service. To disable that feature, add PEERNTP=no to /etc/sysconfig/network . 20.9. Serving NTP Time with PTP NTP to PTP synchronization in the opposite direction is also possible. When ntpd is used to synchronize the system clock, ptp4l can be configured with the priority1 option (or other clock options included in the best master clock algorithm) to be the grandmaster clock and distribute the time from the system clock via PTP : With hardware time stamping, phc2sys needs to be used to synchronize the PTP hardware clock to the system clock. If running phc2sys as a service, edit the /etc/sysconfig/phc2sys configuration file. The default setting in the /etc/sysconfig/phc2sys file is as follows: As root , edit that line as follows: The -r option is used twice here to allow synchronization of the PTP hardware clock on the NIC from the system clock. Restart the phc2sys service for the changes to take effect: To prevent quick changes in the PTP clock's frequency, the synchronization to the system clock can be loosened by using smaller P (proportional) and I (integral) constants for the PI servo: 20.10. Synchronize to PTP or NTP Time Using timemaster When there are multiple PTP domains available on the network, or fallback to NTP is needed, the timemaster program can be used to synchronize the system clock to all available time sources. The PTP time is provided by phc2sys and ptp4l via shared memory driver ( SHM reference clocks to chronyd or ntpd (depending on the NTP daemon that has been configured on the system). The NTP daemon can then compare all time sources, both PTP and NTP , and use the best sources to synchronize the system clock. On start, timemaster reads a configuration file that specifies the NTP and PTP time sources, checks which network interfaces have their own or share a PTP hardware clock (PHC), generates configuration files for ptp4l and chronyd or ntpd , and starts the ptp4l , phc2sys , and chronyd or ntpd processes as needed. It will remove the generated configuration files on exit. It writes configuration files for chronyd , ntpd , and ptp4l to /var/run/timemaster/ . 20.10.1. Starting timemaster as a Service To start timemaster as a service, issue the following command as root : This will read the options in /etc/timemaster.conf . For more information on managing system services in Red Hat Enterprise Linux 7, see Chapter 10, Managing Services with systemd . 20.10.2. Understanding the timemaster Configuration File Red Hat Enterprise Linux provides a default /etc/timemaster.conf file with a number of sections containing default options. The section headings are enclosed in brackets. To view the default configuration, issue a command as follows: Notice the section named as follows: This is an example of an NTP server section, "ntp-server.local" is an example of a host name for an NTP server on the local LAN. Add more sections as required using a host name or IP address as part of the section name. 
Note that the short polling values in that example section are not suitable for a public server, see Chapter 19, Configuring NTP Using ntpd for an explanation of suitable minpoll and maxpoll values. Notice the section named as follows: A "PTP domain" is a group of one or more PTP clocks that synchronize to each other. They may or may not be synchronized to clocks in another domain. Clocks that are configured with the same domain number make up the domain. This includes a PTP grandmaster clock. The domain number in each "PTP domain" section needs to correspond to one of the PTP domains configured on the network. An instance of ptp4l is started for every interface which has its own PTP clock and hardware time stamping is enabled automatically. Interfaces that support hardware time stamping have a PTP clock (PHC) attached, however it is possible for a group of interfaces on a NIC to share a PHC. A separate ptp4l instance will be started for each group of interfaces sharing the same PHC and for each interface that supports only software time stamping. All ptp4l instances are configured to run as a slave. If an interface with hardware time stamping is specified in more than one PTP domain, then only the first ptp4l instance created will have hardware time stamping enabled. Notice the section named as follows: The default timemaster configuration includes the system ntpd and chrony configuration ( /etc/ntp.conf or /etc/chronyd.conf ) in order to include the configuration of access restrictions and authentication keys. That means any NTP servers specified there will be used with timemaster too. The section headings are as follows: [ntp_server ntp-server.local] - Specify polling intervals for this server. Create additional sections as required. Include the host name or IP address in the section heading. [ptp_domain 0] - Specify interfaces that have PTP clocks configured for this domain. Create additional sections with, the appropriate domain number, as required. [timemaster] - Specify the NTP daemon to be used. Possible values are chronyd and ntpd . [chrony.conf] - Specify any additional settings to be copied to the configuration file generated for chronyd . [ntp.conf] - Specify any additional settings to be copied to the configuration file generated for ntpd . [ptp4l.conf] - Specify options to be copied to the configuration file generated for ptp4l . [chronyd] - Specify any additional settings to be passed on the command line to chronyd . [ntpd] - Specify any additional settings to be passed on the command line to ntpd . [phc2sys] - Specify any additional settings to be passed on the command line to phc2sys . [ptp4l] - Specify any additional settings to be passed on the command line to all instances of ptp4l . The section headings and there contents are explained in detail in the timemaster(8) manual page. 20.10.3. Configuring timemaster Options Editing the timemaster Configuration File To change the default configuration, open the /etc/timemaster.conf file for editing as root : For each NTP server you want to control using timemaster , create [ntp_server address ] sections. Note that the short polling values in the example section are not suitable for a public server, see Chapter 19, Configuring NTP Using ntpd for an explanation of suitable minpoll and maxpoll values. To add interfaces that should be used in a domain, edit the #[ptp_domain 0] section and add the interfaces. Create additional domains as required. 
For example: If required to use ntpd as the NTP daemon on this system, change the default entry in the [timemaster] section from chronyd to ntpd . See Chapter 18, Configuring NTP Using the chrony Suite for information on the differences between ntpd and chronyd. If using chronyd as the NTP server on this system, add any additional options below the default include /etc/chrony.conf entry in the [chrony.conf] section. Edit the default include entry if the path to /etc/chrony.conf is known to have changed. If using ntpd as the NTP server on this system, add any additional options below the default include /etc/ntp.conf entry in the [ntp.conf] section. Edit the default include entry if the path to /etc/ntp.conf is known to have changed. In the [ptp4l.conf] section, add any options to be copied to the configuration file generated for ptp4l . This chapter documents common options and more information is available in the ptp4l(8) manual page. In the [chronyd] section, add any command line options to be passed to chronyd when called by timemaster . See Chapter 18, Configuring NTP Using the chrony Suite for information on using chronyd . In the [ntpd] section, add any command line options to be passed to ntpd when called by timemaster . See Chapter 19, Configuring NTP Using ntpd for information on using ntpd . In the [phc2sys] section, add any command line options to be passed to phc2sys when called by timemaster . This chapter documents common options and more information is available in the phy2sys(8) manual page. In the [ptp4l] section, add any command line options to be passed to ptp4l when called by timemaster . This chapter documents common options and more information is available in the ptp4l(8) manual page. Save the configuration file and restart timemaster by issuing the following command as root : 20.11. Improving Accuracy Previously, test results indicated that disabling the tickless kernel capability could significantly improve the stability of the system clock, and thus improve the PTP synchronization accuracy (at the cost of increased power consumption). The kernel tickless mode can be disabled by adding nohz=off to the kernel boot option parameters. However, recent improvements applied to kernel-3.10.0-197.el7 have greatly improved the stability of the system clock and the difference in stability of the clock with and without nohz=off should be much smaller now for most users. The ptp4l and phc2sys applications can be configured to use a new adaptive servo. The advantage over the PI servo is that it does not require configuration of the PI constants to perform well. To make use of this for ptp4l , add the following line to the /etc/ptp4l.conf file: After making changes to /etc/ptp4l.conf , restart the ptp4l service from the command line by issuing the following command as root : To make use of this for phc2sys , add the following line to the /etc/sysconfig/phc2sys file: After making changes to /etc/sysconfig/phc2sys , restart the phc2sys service from the command line by issuing the following command as root : 20.12. Additional Resources The following sources of information provide additional resources regarding PTP and the ptp4l tools. 20.12.1. Installed Documentation ptp4l(8) man page - Describes ptp4l options including the format of the configuration file. pmc(8) man page - Describes the PTP management client and its command options. phc2sys(8) man page - Describes a tool for synchronizing the system clock to a PTP hardware clock (PHC). 
timemaster(8) man page - Describes a program that uses ptp4l and phc2sys to synchronize the system clock using chronyd or ntpd . 20.12.2. Useful Websites http://www.nist.gov/el/isd/ieee/ieee1588.cfm The IEEE 1588 Standard.
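Returning to the timemaster configuration in Section 20.10.3, the following sketch shows what a public-server [ntp_server] section in /etc/timemaster.conf might look like. The host name is a placeholder and the polling values are illustrative assumptions (they correspond to the usual ntpd defaults of 64 to 1024 seconds), not values prescribed by this guide:

[ntp_server ntp-server.example.com]
minpoll 6
maxpoll 10

After saving the change, restart the service as root with systemctl restart timemaster, as shown earlier in this chapter.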
[ "~]# ethtool -T eth3 Time stamping parameters for eth3: Capabilities: hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE) software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE) hardware-receive (SOF_TIMESTAMPING_RX_HARDWARE) software-receive (SOF_TIMESTAMPING_RX_SOFTWARE) software-system-clock (SOF_TIMESTAMPING_SOFTWARE) hardware-raw-clock (SOF_TIMESTAMPING_RAW_HARDWARE) PTP Hardware Clock: 0 Hardware Transmit Timestamp Modes: off (HWTSTAMP_TX_OFF) on (HWTSTAMP_TX_ON) Hardware Receive Filter Modes: none (HWTSTAMP_FILTER_NONE) all (HWTSTAMP_FILTER_ALL)", "~]# yum install linuxptp", "~]# systemctl start ptp4l", "~]# ptp4l -i eth3 -m", "~]# ptp4l -i eth3 -m selected eth3 as PTP clock port 1: INITIALIZING to LISTENING on INITIALIZE port 0: INITIALIZING to LISTENING on INITIALIZE port 1: new foreign master 00a069.fffe.0b552d-1 selected best master clock 00a069.fffe.0b552d port 1: LISTENING to UNCALIBRATED on RS_SLAVE master offset -23947 s0 freq +0 path delay 11350 master offset -28867 s0 freq +0 path delay 11236 master offset -32801 s0 freq +0 path delay 10841 master offset -37203 s1 freq +0 path delay 10583 master offset -7275 s2 freq -30575 path delay 10583 port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED master offset -4552 s2 freq -30035 path delay 10385", "~]# ptp4l -i eth3 -m -S", "~]# sysctl -w net.ipv4.conf.default.rp_filter=2 ~]# sysctl -w net.ipv4.conf.all.rp_filter=2", "~]# sysctl -w net.ipv4.conf.em1.rp_filter=2", "net.ipv4.conf.all.rp_filter=2", "net.ipv4.conf. interface .rp_filter=2", "~]# ptp4l -f /etc/ptp4l.conf", "~]# cat /etc/ptp4l.conf [global] verbose 1 time_stamping software [eth3]", "~]# pmc -u -b 0 'GET CURRENT_DATA_SET' sending: GET CURRENT_DATA_SET 90e2ba.fffe.20c7f8-0 seq 0 RESPONSE MANAGMENT CURRENT_DATA_SET stepsRemoved 1 offsetFromMaster -142.0 meanPathDelay 9310.0", "~]# pmc -u -b 0 'GET TIME_STATUS_NP' sending: GET TIME_STATUS_NP 90e2ba.fffe.20c7f8-0 seq 0 RESPONSE MANAGMENT TIME_STATUS_NP master_offset 310 ingress_time 1361545089345029441 cumulativeScaledRateOffset +1.000000000 scaledLastGmPhaseChange 0 gmTimeBaseIndicator 0 lastGmPhaseChange 0x0000'0000000000000000.0000 gmPresent true gmIdentity 00a069.fffe.0b552d", "~]# pmc help", "OPTIONS=\"-a -r\"", "~]# systemctl restart phc2sys", "~]# phc2sys -a -r", "~]# phc2sys -s eth3 -w", "~]# phc2sys -s eth3 -O -36", "~]# systemctl start phc2sys", "ptp4l[352.359]: selected /dev/ptp0 as PTP clock ptp4l[352.361]: port 1: INITIALIZING to LISTENING on INITIALIZE ptp4l[352.361]: port 0: INITIALIZING to LISTENING on INITIALIZE ptp4l[353.210]: port 1: new foreign master 00a069.fffe.0b552d-1 ptp4l[357.214]: selected best master clock 00a069.fffe.0b552d ptp4l[357.214]: port 1: LISTENING to UNCALIBRATED on RS_SLAVE ptp4l[359.224]: master offset 3304 s0 freq +0 path delay 9202 ptp4l[360.224]: master offset 3708 s1 freq -29492 path delay 9202 ptp4l[361.224]: master offset -3145 s2 freq -32637 path delay 9202 ptp4l[361.224]: port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED ptp4l[362.223]: master offset -145 s2 freq -30580 path delay 9202 ptp4l[363.223]: master offset 1043 s2 freq -29436 path delay 8972 ptp4l[364.223]: master offset 266 s2 freq -29900 path delay 9153 ptp4l[365.223]: master offset 430 s2 freq -29656 path delay 9153 ptp4l[366.223]: master offset 615 s2 freq -29342 path delay 9169 ptp4l[367.222]: master offset -191 s2 freq -29964 path delay 9169 ptp4l[368.223]: master offset 466 s2 freq -29364 path delay 9170 ptp4l[369.235]: master offset 24 s2 freq -29666 path delay 9196 ptp4l[370.235]: master offset -375 s2 
freq -30058 path delay 9238 ptp4l[371.235]: master offset 285 s2 freq -29511 path delay 9199 ptp4l[372.235]: master offset -78 s2 freq -29788 path delay 9204", "phc2sys[526.527]: Waiting for ptp4l phc2sys[527.528]: Waiting for ptp4l phc2sys[528.528]: phc offset 55341 s0 freq +0 delay 2729 phc2sys[529.528]: phc offset 54658 s1 freq -37690 delay 2725 phc2sys[530.528]: phc offset 888 s2 freq -36802 delay 2756 phc2sys[531.528]: phc offset 1156 s2 freq -36268 delay 2766 phc2sys[532.528]: phc offset 411 s2 freq -36666 delay 2738 phc2sys[533.528]: phc offset -73 s2 freq -37026 delay 2764 phc2sys[534.528]: phc offset 39 s2 freq -36936 delay 2746 phc2sys[535.529]: phc offset 95 s2 freq -36869 delay 2733 phc2sys[536.529]: phc offset -359 s2 freq -37294 delay 2738 phc2sys[537.529]: phc offset -257 s2 freq -37300 delay 2753 phc2sys[538.529]: phc offset 119 s2 freq -37001 delay 2745 phc2sys[539.529]: phc offset 288 s2 freq -36796 delay 2766 phc2sys[540.529]: phc offset -149 s2 freq -37147 delay 2760 phc2sys[541.529]: phc offset -352 s2 freq -37395 delay 2771 phc2sys[542.529]: phc offset 166 s2 freq -36982 delay 2748 phc2sys[543.529]: phc offset 50 s2 freq -37048 delay 2756 phc2sys[544.530]: phc offset -31 s2 freq -37114 delay 2748 phc2sys[545.530]: phc offset -333 s2 freq -37426 delay 2747 phc2sys[546.530]: phc offset 194 s2 freq -36999 delay 2749", "summary_interval 10", "ptp4l: [615.253] selected /dev/ptp0 as PTP clock ptp4l: [615.255] port 1: INITIALIZING to LISTENING on INITIALIZE ptp4l: [615.255] port 0: INITIALIZING to LISTENING on INITIALIZE ptp4l: [615.564] port 1: new foreign master 00a069.fffe.0b552d-1 ptp4l: [619.574] selected best master clock 00a069.fffe.0b552d ptp4l: [619.574] port 1: LISTENING to UNCALIBRATED on RS_SLAVE ptp4l: [623.573] port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED ptp4l: [684.649] rms 669 max 3691 freq -29383 +/- 3735 delay 9232 +/- 122 ptp4l: [748.724] rms 253 max 588 freq -29787 +/- 221 delay 9219 +/- 158 ptp4l: [812.793] rms 287 max 673 freq -29802 +/- 248 delay 9211 +/- 183 ptp4l: [876.853] rms 226 max 534 freq -29795 +/- 197 delay 9221 +/- 138 ptp4l: [940.925] rms 250 max 562 freq -29801 +/- 218 delay 9199 +/- 148 ptp4l: [1004.988] rms 226 max 525 freq -29802 +/- 196 delay 9228 +/- 143 ptp4l: [1069.065] rms 300 max 646 freq -29802 +/- 259 delay 9214 +/- 176 ptp4l: [1133.125] rms 226 max 505 freq -29792 +/- 197 delay 9225 +/- 159 ptp4l: [1197.185] rms 244 max 688 freq -29790 +/- 211 delay 9201 +/- 162", "~]# phc2sys -l 5", "~]# phc2sys -u summary-updates", "~]# phc2sys -s eth3 -w -m -u 60 phc2sys[700.948]: rms 1837 max 10123 freq -36474 +/- 4752 delay 2752 +/- 16 phc2sys[760.954]: rms 194 max 457 freq -37084 +/- 174 delay 2753 +/- 12 phc2sys[820.963]: rms 211 max 487 freq -37085 +/- 185 delay 2750 +/- 19 phc2sys[880.968]: rms 183 max 440 freq -37102 +/- 164 delay 2734 +/- 91 phc2sys[940.973]: rms 244 max 584 freq -37095 +/- 216 delay 2748 +/- 16 phc2sys[1000.979]: rms 220 max 573 freq -36666 +/- 182 delay 2747 +/- 43 phc2sys[1060.984]: rms 266 max 675 freq -36759 +/- 234 delay 2753 +/- 17", "~]# cat /etc/ntp.conf server 127.127.1.0 fudge 127.127.1.0 stratum 0", "~]# cat /etc/ptp4l.conf [global] priority1 127 ptp4l -f /etc/ptp4l.conf", "OPTIONS=\"-a -r\"", "~]# vi /etc/sysconfig/phc2sys OPTIONS=\"-a -r -r\"", "~]# systemctl restart phc2sys", "~]# phc2sys -a -r -r -P 0.01 -I 0.0001", "~]# systemctl start timemaster", "~]USD less /etc/timemaster.conf Configuration file for timemaster #[ntp_server ntp-server.local] #minpoll 4 #maxpoll 4 #[ptp_domain 0] 
#interfaces eth0 [timemaster] ntp_program chronyd [chrony.conf] include /etc/chrony.conf [ntp.conf] includefile /etc/ntp.conf [ptp4l.conf] [chronyd] path /usr/sbin/chronyd options -u chrony [ntpd] path /usr/sbin/ntpd options -u ntp:ntp -g [phc2sys] path /usr/sbin/phc2sys [ptp4l] path /usr/sbin/ptp4l", "[ntp_server address ]", "[ptp_domain number ]", "[timemaster]", "~]# vi /etc/timemaster.conf", "[ptp_domain 0] interfaces eth0 [ptp_domain 1] interfaces eth1", "~]# systemctl restart timemaster", "clock_servo linreg", "~]# systemctl restart ptp4l", "-E linreg", "~]# systemctl restart phc2sys" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/ch-configuring_ptp_using_ptp4l
Chapter 9. Setting up to Develop Containerized Applications
Chapter 9. Setting up to Develop Containerized Applications Red Hat supports the development of containerized applications based on Red Hat Enterprise Linux, Red Hat OpenShift , and a number of other Red Hat products. Red Hat Container Development Kit (CDK) provides a Red Hat Enterprise Linux virtual machine that runs a single-node Red Hat OpenShift 3 cluster. It does not support OpenShift 4. Follow the instructions in the Red Hat Container Development Kit Getting Started Guide, Chapter 1.4., Installing CDK . Red Hat CodeReady Containers (CRC) brings a minimal OpenShift 4 cluster to your local computer, providing a minimal environment for development and testing purposes. CodeReady Containers is mainly targeted at running on developers' desktops. Red Hat Development Suite provides Red Hat tools for the development of containerized applications in Java, C, and C++. It consists of Red Hat JBoss Developer Studio , OpenJDK , Red Hat Container Development Kit , and other minor components. To install DevSuite , follow the instructions in the Red Hat Development Suite Installation Guide . .NET Core 3.1 is a general-purpose development platform for building high-quality applications that run on the OpenShift Container Platform versions 3.3 and later. For installation and usage instructions, see the .NET Core Getting Started Guide Chapter 2., Using .NET Core 3.1 on Red Hat OpenShift Container Platform . Additional Resources Red Hat CodeReady Studio - Getting Started with Container and Cloud-based Development Product Documentation for Red Hat Container Development Kit Product Documentation for OpenShift Container Platform Red Hat Enterprise Linux Atomic Host - Overview of Containers in Red Hat Systems
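As a rough illustration of getting started with CodeReady Containers, the sequence below is a sketch only; the exact commands and prompts depend on the CRC release you download and on the pull secret you obtain from Red Hat, so refer to the CodeReady Containers documentation for the authoritative steps:

crc setup    # prepares the host (virtualization and networking prerequisites)
crc start    # creates and starts the single-node OpenShift 4 cluster; prompts for a pull secret
crc oc-env   # prints how to add the bundled oc client to your PATH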
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/developer_guide/setting-up_setup-developing-containers
Chapter 19. OverlappingRangeIPReservation [whereabouts.cni.cncf.io/v1alpha1]
Chapter 19. OverlappingRangeIPReservation [whereabouts.cni.cncf.io/v1alpha1] Description OverlappingRangeIPReservation is the Schema for the OverlappingRangeIPReservations API Type object Required spec 19.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OverlappingRangeIPReservationSpec defines the desired state of OverlappingRangeIPReservation 19.1.1. .spec Description OverlappingRangeIPReservationSpec defines the desired state of OverlappingRangeIPReservation Type object Required podref Property Type Description containerid string ifname string podref string 19.2. API endpoints The following API endpoints are available: /apis/whereabouts.cni.cncf.io/v1alpha1/overlappingrangeipreservations GET : list objects of kind OverlappingRangeIPReservation /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/overlappingrangeipreservations DELETE : delete collection of OverlappingRangeIPReservation GET : list objects of kind OverlappingRangeIPReservation POST : create an OverlappingRangeIPReservation /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/overlappingrangeipreservations/{name} DELETE : delete an OverlappingRangeIPReservation GET : read the specified OverlappingRangeIPReservation PATCH : partially update the specified OverlappingRangeIPReservation PUT : replace the specified OverlappingRangeIPReservation 19.2.1. /apis/whereabouts.cni.cncf.io/v1alpha1/overlappingrangeipreservations HTTP method GET Description list objects of kind OverlappingRangeIPReservation Table 19.1. HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservationList schema 401 - Unauthorized Empty 19.2.2. /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/overlappingrangeipreservations HTTP method DELETE Description delete collection of OverlappingRangeIPReservation Table 19.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OverlappingRangeIPReservation Table 19.3. HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservationList schema 401 - Unauthorized Empty HTTP method POST Description create an OverlappingRangeIPReservation Table 19.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.5. Body parameters Parameter Type Description body OverlappingRangeIPReservation schema Table 19.6. HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservation schema 201 - Created OverlappingRangeIPReservation schema 202 - Accepted OverlappingRangeIPReservation schema 401 - Unauthorized Empty 19.2.3. /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/overlappingrangeipreservations/{name} Table 19.7. Global path parameters Parameter Type Description name string name of the OverlappingRangeIPReservation HTTP method DELETE Description delete an OverlappingRangeIPReservation Table 19.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 19.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OverlappingRangeIPReservation Table 19.10. HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservation schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OverlappingRangeIPReservation Table 19.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.12. 
HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservation schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OverlappingRangeIPReservation Table 19.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.14. Body parameters Parameter Type Description body OverlappingRangeIPReservation schema Table 19.15. HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservation schema 201 - Created OverlappingRangeIPReservation schema 401 - Unauthorized Empty
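For orientation, a manifest consistent with the schema above might look as follows. Note that these objects are normally created and managed by the Whereabouts IPAM plugin itself rather than written by hand, and the name, namespace, and spec values here are purely illustrative assumptions:

apiVersion: whereabouts.cni.cncf.io/v1alpha1
kind: OverlappingRangeIPReservation
metadata:
  name: 192.168.2.10
  namespace: default
spec:
  podref: default/example-pod
  ifname: net1
  containerid: 0123456789abcdef

Because the API exposes a cluster-wide list endpoint, existing reservations can be inspected with: oc get overlappingrangeipreservations -A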
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/network_apis/overlappingrangeipreservation-whereabouts-cni-cncf-io-v1alpha1
4.74. gnome-power-manager
4.74. gnome-power-manager 4.74.1. RHBA-2012:1228 - gnome-packagekit bug fix update Updated gnome-packagekit packages that fix a bug are now available for Red Hat Enterprise Linux 6 Extended Update Support. The gnome-packagekit packages provide session applications for the PackageKit API. Bug Fix BZ# 822946 Previously, it was possible for the user to log out of the system or shut it down while the PackageKit update tool was running and writing to the RPM database (rpmdb). Consequently, rpmdb could become damaged and inconsistent due to the unexpected termination and cause various problems with subsequent operation of the rpm, yum, and PackageKit utilities. This update modifies PackageKit to not allow shutting down the system when a transaction writing to rpmdb is active, thus fixing this bug. Users of gnome-packagekit are advised to upgrade to these updated packages, which fix this bug. 4.74.2. RHBA-2012:0686 - gnome-power-manager bug fix update Updated gnome-power-manager packages that fix one bug are now available for Red Hat Enterprise Linux 6. GNOME Power Manager uses the information and facilities provided by DeviceKit-power to display icons and handle user callbacks in an interactive GNOME session. Bug Fix BZ# 800267 After resuming the system or re-enabling the display, an icon could appear in the notification area with an erroneous tooltip that read "Session active, not inhibited, screen idle. If you see this test, your display server is broken and you should notify your distributor." and included a URL to an external web page. This error message was incorrect, had no effect on the system and could be safely ignored. In addition, linking to an external URL from the notification and status area is unwanted. To prevent this, the icon is no longer used for debugging idle problems. All users are advised to upgrade to these updated gnome-power-manager packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/gnome-power-manager
Viewing and managing your subscription inventory on the Hybrid Cloud Console
Viewing and managing your subscription inventory on the Hybrid Cloud Console Subscription Central 1-latest Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html-single/viewing_and_managing_your_subscription_inventory_on_the_hybrid_cloud_console/index
Chapter 74. Coverage reports for test scenarios
Chapter 74. Coverage reports for test scenarios The test scenario designer provides a clear and coherent way of displaying the test coverage statistics using the Coverage Report tab on the right side of the test scenario designer. You can also download the coverage report to view and analyze the test coverage statistics. Downloaded test scenario coverage report supports the .CSV file format. For more information about the RFC specification for the Comma-Separated Values (CSV) format, see Common Format and MIME Type for Comma-Separated Values (CSV) Files . You can view the coverage report for rule-based and DMN-based test scenarios. 74.1. Generating coverage reports for rule-based test scenarios In rule-based test scenarios, the Coverage Report tab contains the detailed information about the following: Number of available rules Number of fired rules Percentage of fired rules Percentage of executed rules represented as a pie chart Number of times each rule has executed The rules that are executed for each defined test scenario Follow the procedure to generate a coverage report for rule-based test scenarios: Prerequisites The rule-based test scenario template are created for the selected test scenario. For more information about creating rule-based test scenarios, see Section 65.1, "Creating a test scenario template for rule-based test scenarios" . The individual test scenarios are defined. For more information about defining a test scenario, see Chapter 67, Defining a test scenario . Note To generate the coverage report for rule-based test scenario, you must create at least one rule. Procedure Open the rule-based test scenarios in the test scenario designer. Run the defined test scenarios. Click Coverage Report on the right of the test scenario designer to display the test coverage statistics. Optional: To download the test scenario coverage report, Click Download report . 74.2. Generating coverage reports for DMN-based test scenarios In DMN-based test scenarios, the Coverage Report tab contains the detailed information about the following: Number of available decisions Number of executed decisions Percentage of executed decisions Percentage of executed decisions represented as a pie chart Number of times each decision has executed Decisions that are executed for each defined test scenario Follow the procedure to generate a coverage report for DMN-based test scenarios: Prerequisites The DMN-based test scenario template is created for the selected test scenario. For more information about creating DMN-based test scenarios, see Section 66.1, "Creating a test scenario template for DMN-based test scenarios" . The individual test scenarios are defined. For more information about defining a test scenario, see Chapter 67, Defining a test scenario . Procedure Open the DMN-based test scenarios in the test scenario designer. Run the defined test scenarios. Click Coverage Report on the right of the test scenario designer to display the test coverage statistics. Optional: To download the test scenario coverage report, Click Download report .
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/test-scenarios-coverage-report-con_test-scenarios
7.1 Release Notes
7.1 Release Notes Red Hat Ceph Storage 7.1 Release notes for features and enhancements, known issues, and other important release information. Red Hat Ceph Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/7.1_release_notes/index
Chapter 73. Exporting and importing test scenario spreadsheets
Chapter 73. Exporting and importing test scenario spreadsheets These sections show how to export and import test scenario spreadsheets in the test scenario designer. You can analyze and manage test scenario spreadsheets with software such as Microsoft Excel or LibreOffice Calc. The test scenario designer supports the .CSV file format. For more information about the RFC specification for the Comma-Separated Values (CSV) format, see Common Format and MIME Type for Comma-Separated Values (CSV) Files . 73.1. Exporting a test scenario spreadsheet Follow the procedure below to export a test scenario spreadsheet using the Test Scenario designer. Procedure In the Test Scenario designer toolbar on the upper-right, click the Export button. Select a destination in your local file directory and confirm to save the .CSV file. The .CSV file is exported to your local machine. 73.2. Importing a test scenario spreadsheet Follow the procedure below to import a test scenario spreadsheet using the Test Scenario designer. Procedure In the Test Scenario designer toolbar on the upper-right, click the Import button. In the Select file to Import prompt, click Choose File... and select the .CSV file you would like to import from your local file directory. Click Import . The .CSV file is imported to the Test Scenario designer. Warning You must not modify the headers in the selected .CSV file. Otherwise, the spreadsheet may not be successfully imported.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/test-designer-test-scenario-export-import-spreadsheet-con
Chapter 4. Publishing applications with .NET 9.0
Chapter 4. Publishing applications with .NET 9.0 .NET 9.0 applications can be published to use a shared system-wide version of .NET or to include .NET. The following methods exist for publishing .NET 9.0 applications: Self-contained deployment (SCD) - The application includes .NET. This method uses a runtime built by Microsoft. Framework-dependent deployment (FDD) - The application uses a shared system-wide version of .NET. Note When publishing an application for RHEL, Red Hat recommends using FDD, because it ensures that the application is using an up-to-date version of .NET, built by Red Hat, that uses a set of native dependencies. Prerequisites Existing .NET application. For more information about how to create a .NET application, see Creating an application using .NET . 4.1. Publishing .NET applications The following procedure outlines how to publish a framework-dependent application. Procedure Publish the framework-dependent application: Replace my-app with the name of the application you want to publish. Optional: If the application is for RHEL only, trim out the dependencies needed for other platforms: Replace architecture based on the platform you are using: For Intel: x64 For IBM Z and LinuxONE: s390x For 64-bit Arm: arm64 For IBM Power: ppc64le
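For comparison with the framework-dependent commands above, a self-contained deployment can be produced along the following lines. This is a sketch in which my-app is the placeholder used throughout this chapter and linux-x64 is one example runtime identifier; on RHEL, Red Hat recommends FDD as noted above:

dotnet publish my-app -f net9.0 -r linux-x64 --self-contained true

The resulting publish output contains the application together with the .NET runtime, so the target host does not need a system-wide .NET installation.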
[ "dotnet publish my-app -f net9.0", "dotnet publish my-app -f net9.0 -r rhel.9- architecture --self-contained false" ]
https://docs.redhat.com/en/documentation/net/9.0/html/getting_started_with_.net_on_rhel_9/assembly_publishing-apps-using-dotnet_getting-started-with-dotnet-on-rhel-9
4.7. Adding Journals to a File System
4.7. Adding Journals to a File System The gfs2_jadd command is used to add journals to a GFS2 file system. You can add journals to a GFS2 file system dynamically at any point without expanding the underlying logical volume. The gfs2_jadd command must be run on a mounted file system, but it needs to be run on only one node in the cluster. All the other nodes sense that the expansion has occurred. Note If a GFS2 file system is full, the gfs2_jadd command will fail, even if the logical volume containing the file system has been extended and is larger than the file system. This is because in a GFS2 file system, journals are plain files rather than embedded metadata, so simply extending the underlying logical volume will not provide space for the journals. Before adding journals to a GFS2 file system, you can use the journals option of the gfs2_tool command to find out how many journals the GFS2 file system currently contains. The following example displays the number and size of the journals in the file system mounted at /mnt/gfs2 . Usage Number Specifies the number of new journals to be added. MountPoint Specifies the directory where the GFS2 file system is mounted. Examples In this example, one journal is added to the file system on the /mygfs2 directory. In this example, two journals are added to the file system on the /mygfs2 directory. Complete Usage MountPoint Specifies the directory where the GFS2 file system is mounted. Device Specifies the device node of the file system. Table 4.4, "GFS2-specific Options Available When Adding Journals" describes the GFS2-specific options that can be used when adding journals to a GFS2 file system. Table 4.4. GFS2-specific Options Available When Adding Journals Flag Parameter Description -h Help. Displays short usage message. -J MegaBytes Specifies the size of the new journals in megabytes. Default journal size is 128 megabytes. The minimum size is 32 megabytes. To add journals of different sizes to the file system, the gfs2_jadd command must be run for each size journal. The size specified is rounded down so that it is a multiple of the journal-segment size that was specified when the file system was created. -j Number Specifies the number of new journals to be added by the gfs2_jadd command. The default value is 1. -q Quiet. Turns down the verbosity level. -V Displays command version information.
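Combining the flags in Table 4.4, the following sketch adds two 64 megabyte journals in a single run; the mount point /mygfs2 matches the earlier examples, and the journal count and size are illustrative values:

gfs2_jadd -j2 -J64 /mygfs2

Remember that journals of different sizes must be added with separate gfs2_jadd runs, as noted in the table.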
[ "gfs2_tool journals /mnt/gfs2 journal2 - 128MB journal1 - 128MB journal0 - 128MB 3 journal(s) found.", "gfs2_jadd -j Number MountPoint", "gfs2_jadd -j1 /mygfs2", "gfs2_jadd -j2 /mygfs2", "gfs2_jadd [ Options ] { MountPoint | Device } [ MountPoint | Device ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/s1-manage-addjournalfs
Chapter 5. Observability UI plugins
Chapter 5. Observability UI plugins 5.1. Observability UI plugins overview You can use the Cluster Observability Operator (COO) to install and manage UI plugins to enhance the observability capabilities of the OpenShift Container Platform web console. The plugins extend the default functionality, providing new UI features for troubleshooting, distributed tracing, and cluster logging. 5.1.1. Cluster logging The logging UI plugin surfaces logging data in the web console on the Observe Logs page. You can specify filters, queries, time ranges and refresh rates. The results displayed a list of collapsed logs, which can then be expanded to show more detailed information for each log. For more information, see the logging UI plugin page. 5.1.2. Troubleshooting Important The Cluster Observability Operator troubleshooting panel UI plugin is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The troubleshooting panel UI plugin for OpenShift Container Platform version 4.16+ provides observability signal correlation, powered by the open source Korrel8r project. You can use the troubleshooting panel available from the Observe Alerting page to easily correlate metrics, logs, alerts, netflows, and additional observability signals and resources, across different data stores. Users of OpenShift Container Platform version 4.17+ can also access the troubleshooting UI panel from the Application Launcher . The output of Korrel8r is displayed as an interactive node graph. When you click on a node, you are automatically redirected to the corresponding web console page with the specific information for that node, for example, metric, log, or pod. For more information, see the troubleshooting UI plugin page. 5.1.3. Distributed tracing Important The Cluster Observability Operator distributed tracing UI plugin is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The distributed tracing UI plugin adds tracing-related features to the web console on the Observe Traces page. You can follow requests through the front end and into the backend of microservices, helping you identify code errors and performance bottlenecks in distributed systems. You can select a supported TempoStack or TempoMonolithic multi-tenant instance running in the cluster and set a time range and query to view the trace data. For more information, see the distributed tracing UI plugin page. 5.2. Logging UI plugin The logging UI plugin surfaces logging data in the OpenShift Container Platform web console on the Observe Logs page. 
You can specify filters, queries, time ranges and refresh rates, with the results displayed as a list of collapsed logs, which can then be expanded to show more detailed information for each log. When you have also deployed the Troubleshooting UI plugin on OpenShift Container Platform version 4.16+, it connects to the Korrel8r service and adds direct links from the Administration perspective, from the Observe Logs page, to the Observe Metrics page with a correlated PromQL query. It also adds a See Related Logs link from the Administration perspective alerting detail page, at Observe Alerting , to the Observe Logs page with a correlated filter set selected. The features of the plugin are categorized as: dev-console Adds the logging view to the Developer perspective. alerts Merges the web console alerts with log-based alerts defined in the Loki ruler. Adds a log-based metrics chart in the alert detail view. dev-alerts Merges the web console alerts with log-based alerts defined in the Loki ruler. Adds a log-based metrics chart in the alert detail view for the Developer perspective. For Cluster Observability Operator (COO) versions, the support for these features in OpenShift Container Platform versions is shown in the following table: COO version OCP versions Features 0.3.0+ 4.12 dev-console 0.3.0+ 4.13 dev-console , alerts 0.3.0+ 4.14+ dev-console , alerts , dev-alerts 5.2.1. Installing the Cluster Observability Operator logging UI plugin Prerequisites You have access to the cluster as a user with the cluster-admin role. You have logged in to the OpenShift Container Platform web console. You have installed the Cluster Observability Operator. You have a LokiStack instance in your cluster. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators and select Cluster Observability Operator. Choose the UI Plugin tab (at the far right of the tab list) and click Create UIPlugin . Select YAML view , enter the following content, and then click Create : apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki logsLimit: 50 timeout: 30s 5.3. Distributed tracing UI plugin Important The Cluster Observability Operator distributed tracing UI plugin is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The distributed tracing UI plugin adds tracing-related features to the Administrator perspective of the OpenShift web console at Observe Traces . You can follow requests through the front end and into the backend of microservices, helping you identify code errors and performance bottlenecks in distributed systems. 5.3.1. Installing the Cluster Observability Operator distributed tracing UI plugin Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have logged in to the OpenShift Container Platform web console. 
You have installed the Cluster Observability Operator Procedure In the OpenShift Container Platform web console, click Operators Installed Operators and select Cluster Observability Operator Choose the UI Plugin tab (at the far right of the tab list) and press Create UIPlugin Select YAML view , enter the following content, and then press Create : apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: distributed-tracing spec: type: DistributedTracing 5.3.2. Using the Cluster Observability Operator distributed tracing UI plugin Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have logged in to the OpenShift Container Platform web console. You have installed the Cluster Observability Operator. You have installed the Cluster Observability Operator distributed tracing UI plugin. You have a TempoStack or TempoMonolithic multi-tenant instance in the cluster. Procedure In the Administrator perspective of the OpenShift Container Platform web console, click Observe Traces . Select a TempoStack or TempoMonolithic multi-tenant instance and set a time range and query for the traces to be loaded. The traces are displayed on a scatter-plot showing the trace start time, duration, and number of spans. Underneath the scatter plot, there is a list of traces showing information such as the Trace Name , number of Spans , and Duration . Click on a trace name link. The trace detail page for the selected trace contains a Gantt Chart of all of the spans within the trace. Select a span to show a breakdown of the configured attributes. 5.4. Troubleshooting UI plugin Important The Cluster Observability Operator troubleshooting panel UI plugin is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The troubleshooting UI plugin for OpenShift Container Platform version 4.16+ provides observability signal correlation, powered by the open source Korrel8r project. With the troubleshooting panel that is available under Observe Alerting , you can easily correlate metrics, logs, alerts, netflows, and additional observability signals and resources, across different data stores. Users of OpenShift Container Platform version 4.17+ can also access the troubleshooting UI panel from the Application Launcher . When you install the troubleshooting UI plugin, a Korrel8r service named korrel8r is deployed in the same namespace, and it is able to locate related observability signals and Kubernetes resources from its correlation engine. The output of Korrel8r is displayed in the form of an interactive node graph in the OpenShift Container Platform web console. Nodes in the graph represent a type of resource or signal, while edges represent relationships. When you click on a node, you are automatically redirected to the corresponding web console page with the specific information for that node, for example, metric, log, pod. 5.4.1. Installing the Cluster Observability Operator Troubleshooting UI plugin Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. 
You have logged in to the OpenShift Container Platform web console. You have installed the Cluster Observability Operator. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators and select Cluster Observability Operator. Choose the UI Plugin tab (at the far right of the tab list) and click Create UIPlugin . Select YAML view , enter the following content, and then click Create : apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: troubleshooting-panel spec: type: TroubleshootingPanel 5.4.2. Using the Cluster Observability Operator troubleshooting UI plugin Prerequisites You have access to the OpenShift Container Platform cluster as a user with the cluster-admin cluster role. If your cluster version is 4.17+, you can access the troubleshooting UI panel from the Application Launcher . You have logged in to the OpenShift Container Platform web console. You have installed OpenShift Container Platform Logging, if you want to visualize correlated logs. You have installed OpenShift Container Platform Network Observability, if you want to visualize correlated netflows. You have installed the Cluster Observability Operator. You have installed the Cluster Observability Operator troubleshooting UI plugin. Note The troubleshooting panel relies on the observability signal stores installed in your cluster. Kubernetes resources, alerts and metrics are always available by default in an OpenShift Container Platform cluster. Other signal types require optional components to be installed: Logs: Red Hat OpenShift Logging (collection) and Loki Operator provided by Red Hat (store) Network events: Network observability provided by Red Hat (collection) and Loki Operator provided by Red Hat (store) Procedure In the admin perspective of the web console, navigate to Observe Alerting and then select an alert. If the alert has correlated items, a Troubleshooting Panel link will appear above the chart on the alert detail page. Click on the Troubleshooting Panel link to display the panel. The panel consists of query details and a topology graph of the query results. The selected alert is converted into a Korrel8r query string and sent to the korrel8r service. The results are displayed as a graph network connecting the returned signals and resources. This is a neighbourhood graph, starting at the current resource and including related objects up to 3 steps away from the starting point. Clicking on nodes in the graph takes you to the corresponding web console pages for those resources. You can use the troubleshooting panel to find resources relating to the chosen alert. Note Clicking on a node may sometimes show fewer results than indicated on the graph. This is a known issue that will be addressed in a future release. Alert (1): This node is the starting point in the graph and represents the KubeContainerWaiting alert displayed in the web console. Pod (1): This node indicates that there is a single Pod resource associated with this alert. Clicking on this node will open a console search showing the related pod directly. Event (2): There are two Kubernetes events associated with the pod. Click this node to see the events. Logs (74): This pod has 74 lines of logs, which you can access by clicking on this node. Metrics (105): There are many metrics associated with the pod. Network (6): There are network events, meaning the pod has communicated over the network. 
The remaining nodes in the graph represent the Service , Deployment and DaemonSet resources that the pod has communicated with. Focus: Clicking this button updates the graph. By default, the graph itself does not change when you click on nodes in the graph. Instead, the main web console page changes, and you can then navigate to other resources using links on the page, while the troubleshooting panel itself stays open and unchanged. To force an update to the graph in the troubleshooting panel, click Focus . This draws a new graph, using the current resource in the web console as the starting point. Show Query: Clicking this button enables some experimental features: Hide Query hides the experimental features. The query that identifies the starting point for the graph. The query language, part of the Korrel8r correlation engine used to create the graphs, is experimental and may change in the future. The query is updated by the Focus button to correspond to the resources in the main web console window. Neighbourhood depth is used to display a smaller or larger neighbourhood. Note Setting a large value in a large cluster might cause the query to fail, if the number of results is too big. Goal class results in a goal directed search instead of a neighbourhood search. A goal directed search shows all paths from the starting point to the goal class, which indicates a type of resource or signal. The format of the goal class is experimental and may change. Currently, the following goals are valid: k8s: RESOURCE[VERSION.[GROUP]] identifying a kind of Kubernetes resource. For example, k8s:Pod or k8s:Deployment.apps.v1 . alert:alert representing any alert. metric:metric representing any metric. netflow:network representing any network observability network event. log: LOG_TYPE representing stored logs, where LOG_TYPE must be one of application , infrastructure or audit . 5.4.3. Creating the example alert To trigger an alert as a starting point to use in the troubleshooting UI panel, you can deploy a container that is deliberately misconfigured. Procedure Use the following YAML, either from the command line or in the web console, to create a broken deployment in a system namespace: apiVersion: apps/v1 kind: Deployment metadata: name: bad-deployment namespace: default 1 spec: selector: matchLabels: app: bad-deployment template: metadata: labels: app: bad-deployment spec: containers: 2 - name: bad-deployment image: quay.io/openshift-logging/vector:5.8 1 The deployment must be in a system namespace (such as default ) to cause the desired alerts. 2 This container deliberately tries to start a vector server with no configuration file. The server logs a few messages, and then exits with an error. Alternatively, you can deploy any container you like that is badly configured, causing it to trigger an alert. View the alerts: Go to Observe Alerting and click clear all filters . View the Pending alerts. Important Alerts first appear in the Pending state. They do not start Firing until the container has been crashing for some time. By viewing Pending alerts, you do not have to wait as long to see them occur. Choose one of the KubeContainerWaiting , KubePodCrashLooping , or KubePodNotReady alerts and open the troubleshooting panel by clicking on the link. Alternatively, if the panel is already open, click the "Focus" button to update the graph.
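The UIPlugin manifests shown in this chapter can also be applied from the command line instead of the web console form. This is a sketch that assumes you have saved one of the YAML examples above to a local file named uiplugin.yaml (a hypothetical file name) and that the UIPlugin custom resource definition uses the plural name uiplugins:

oc apply -f uiplugin.yaml      # create the UIPlugin resource
oc get uiplugins               # confirm the resource was created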
[ "apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki logsLimit: 50 timeout: 30s", "apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: distributed-tracing spec: type: DistributedTracing", "apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: troubleshooting-panel spec: type: TroubleshootingPanel", "apiVersion: apps/v1 kind: Deployment metadata: name: bad-deployment namespace: default 1 spec: selector: matchLabels: app: bad-deployment template: metadata: labels: app: bad-deployment spec: containers: 2 - name: bad-deployment image: quay.io/openshift-logging/vector:5.8" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/cluster_observability_operator/observability-ui-plugins
Using AMQ Streams on OpenShift
Using AMQ Streams on OpenShift Red Hat AMQ 2021.q2 For use with AMQ Streams 1.7 on OpenShift Container Platform
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/using_amq_streams_on_openshift/index
Architecture
Architecture Red Hat Advanced Cluster Security for Kubernetes 4.7 System architecture Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/architecture/index
Chapter 35. Using the Red Hat Process Automation Manager installer
Chapter 35. Using the Red Hat Process Automation Manager installer This section describes how to install Business Central and KIE Server using the installer JAR file. The JAR file is an executable file that installs Red Hat Process Automation Manager in an existing Red Hat JBoss Web Server 5.5.1 server installation. You can run the installer in interactive or command line interface (CLI) mode. Next steps: Follow the instructions in one of the following sections: Section 35.1, "Using the installer in interactive mode" Section 35.2, "Using the installer in CLI mode" 35.1. Using the installer in interactive mode The installer for Red Hat Process Automation Manager is an executable JAR file. You can use it to install Red Hat Process Automation Manager in an existing Red Hat JBoss Web Server 5.5.1 server installation. Note For security reasons, you should run the installer as a non-root user. Prerequisites The Red Hat Process Automation Manager 7.13.5 Installer has been downloaded. For instructions, see Chapter 34, Downloading the Red Hat Process Automation Manager installation files . A supported JDK is installed. For a list of supported JDKs, see Red Hat Process Automation Manager 7 Supported Configurations . A backed-up Red Hat JBoss Web Server 5.5.1 server installation is available. Sufficient user permissions to complete the installation are granted. Note Ensure that you are logged in with a user that has write permission for Tomcat. The JAR binary is included in the $PATH environment variable. On Red Hat Enterprise Linux, it is included in the java-$JAVA_VERSION-openjdk-devel package. Note Red Hat Process Automation Manager is designed to work with UTF-8 encoding. If a different encoding system is used by the underlying JVM, unexpected errors might occur. To ensure UTF-8 is used by the JVM, use the "-Dfile.encoding=UTF-8" system property. For a list of system properties, see Appendix B, Business Central system properties . Procedure In a terminal window, navigate to the directory where you downloaded the installer JAR file and enter the following command: Note When running the installer on Windows, you may be prompted to provide administrator credentials during the installation. To prevent this requirement, add the izpack.mode=privileged option to the installation command: Furthermore, when running the installer on a 32-bit Java virtual machine, you might encounter memory limitations. To prevent this issue, run this command: The graphical installer displays a splash screen and a license agreement page. Click I accept the terms of this license agreement and click Next . Specify the Red Hat JBoss Web Server 5.5.1 server home where you want to install Red Hat Process Automation Manager and click Next . Select the components that you want to install and click Next . You cannot install Business Central on Red Hat JBoss Web Server. You can only install it on Red Hat JBoss EAP. However, you can install KIE Server and the headless Process Automation Manager controller on Red Hat JBoss Web Server. The headless Process Automation Manager controller is used to manage KIE Server. Install the headless Process Automation Manager controller if you plan to manage multiple KIE Server instances. Create a user and click Next . By default, if you install both Business Central and KIE Server in the same container the new user is given the admin , kie-server , and rest-all roles. If you install only KIE Server, the user is given the kie-server role. The kie-server role is required to access KIE Server REST capabilities. 
Note Make sure that the specified user name is not the same as an existing user, role, or group. For example, do not create a user with the user name admin . The password must have at least eight characters and must contain at least one number and one non-alphanumeric character, but not & (ampersand). Make a note of the user name and password. You will need them to access Business Central and KIE Server. On the Installation Overview page, review the components that you will install and click Next to start the installation. When the installation has completed, click Next . If KIE Server is installed, the Configure Runtime step appears under Component Installation . On the Configure Runtime Environment page, choose to perform the default installation or perform an advanced configuration. If you choose Perform advanced configuration , you can choose to configure database settings or customize certain KIE Server options. If you selected Customize database settings , on the JDBC Driver Configuration page specify a data source JDBC driver vendor, select one or more driver JAR files, and click Next . A data source is an object that enables a Java Database Connectivity (JDBC) client, such as an application server, to establish a connection with a database. Applications look up the data source on the Java Naming and Directory Interface (JNDI) tree or in the local application context and request a database connection to retrieve data. You must configure data sources for KIE Server to ensure correct data exchange between the servers and the designated database. If you selected Customize KIE Server properties , on the KIE Server Properties Configuration page change any of the following properties: Change the value of KIE Server ID to change the name of the KIE Server property. Deselect any KIE Server functions that you want to disable. Click Next to configure the runtime environment. When Processing finished appears at the top of the screen, click Next to complete the installation. Optional: Click Generate Installation Script and Properties File to save the installation data in XML files, and then click Done . The installer generates two files. The auto.xml file automates future installations and the auto.xml.variables file stores user passwords and other sensitive variables. Use the auto.xml file to repeat the Red Hat Process Automation Manager installation on multiple systems with the same type of server and the same configuration as the original installation. If necessary, update the installpath parameter in the auto.xml file. To perform an installation using the XML file, enter the following command: You have successfully installed Red Hat Process Automation Manager using the installer. If you installed only Business Central, repeat these steps to install KIE Server on a separate server. Note If you use Microsoft SQL Server, make sure you have configured applicable transaction isolation for your database. If you do not, you may experience deadlocks. The recommended configuration is to turn on ALLOW_SNAPSHOT_ISOLATION and READ_COMMITTED_SNAPSHOT by entering the following statements: 35.2. Using the installer in CLI mode You can use the command-line interface (CLI) to run the Red Hat Process Automation Manager installer. Note For security reasons, you should run the installer as a non-root user. Prerequisites The Red Hat Process Automation Manager 7.13.5 Installer has been downloaded. For instructions, see Chapter 34, Downloading the Red Hat Process Automation Manager installation files . A supported JDK is installed. 
For a list of supported JDKs, see Red Hat Process Automation Manager 7 Supported Configurations . A backed-up Red Hat JBoss Web Server 5.5.1 server installation is available. Sufficient user permissions to complete the installation are granted. Note Ensure that you are logged in with a user that has write permission for Tomcat. The JAR binary is included in the $PATH environment variable. On Red Hat Enterprise Linux, it is included in the java-$JAVA_VERSION-openjdk-devel package. Note Red Hat Process Automation Manager is designed to work with UTF-8 encoding. If a different encoding system is used by the underlying JVM, unexpected errors might occur. To ensure UTF-8 is used by the JVM, use the "-Dfile.encoding=UTF-8" system property. For a list of system properties, see Appendix B, Business Central system properties . Procedure In a terminal window, navigate to the directory where you downloaded the installer file and enter the following command: The command-line interactive process will start and display the End-User License Agreement. Read the license agreement, enter 1 , and press Enter to continue: Enter the parent directory of an existing Red Hat JBoss Web Server 5.5.1 installation. The installer will verify the installation at the location provided. Enter 1 to confirm and continue. Follow the instructions in the installer to complete the installation. Note When you create the user name and password, make sure that the specified user name does not conflict with any known title of a role or a group. For example, if there is a role called admin , you should not create a user with the user name admin . The password must have at least eight characters and must contain at least one number and one non-alphanumeric character ( not including the character & ). Make a note of the user name and password. You will need them to access Business Central and KIE Server. When the installation has completed, you will see this message: Enter y to create XML files that contain the installation data, or n to complete the installation. If you enter y , you are prompted to specify a path for the XML files. Enter a path or press the Enter key to accept the suggested path. The installer generates two files. The auto.xml file automates future installations and the auto.xml.variables file stores user passwords and other sensitive variables. Use the auto.xml file on multiple systems to easily repeat a Red Hat Process Automation Manager installation on the same type of server with the same configuration as the original installation. If necessary, update the installpath parameter in the auto.xml file. To perform an installation using the XML file, enter the following command: If you installed only KIE Server, repeat these steps to install the headless Process Automation Manager controller on a separate server. Note If you use Microsoft SQL Server, make sure you have configured applicable transaction isolation for your database. If you do not, you may experience deadlocks. The recommended configuration is to turn on ALLOW_SNAPSHOT_ISOLATION and READ_COMMITTED_SNAPSHOT by entering the following statements:
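Putting the notes above together, a console-mode installation that also forces UTF-8 encoding can be started as follows; the -Dfile.encoding=UTF-8 property is only needed when the JVM default encoding is not already UTF-8:

java -Dfile.encoding=UTF-8 -jar rhpam-installer-7.13.5.jar -console

The same property can be added in the same way to the interactive-mode and auto.xml invocations shown in the command listing for this chapter.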
[ "java -jar rhpam-installer-7.13.5.jar", "java -Dizpack.mode=privileged -jar rhpam-installer-7.13.5.jar", "java -XX:MaxHeapSize=4g -jar rhpam-installer-7.13.5.jar", "java -jar rhpam-installer-7.13.5.jar <path-to-auto.xml-file>", "ALTER DATABASE <DBNAME> SET ALLOW_SNAPSHOT_ISOLATION ON ALTER DATABASE <DBNAME> SET READ_COMMITTED_SNAPSHOT ON", "java -jar rhpam-installer-7.13.5.jar -console", "press 1 to continue, 2 to quit, 3 to redisplay.", "Specify the home directory of one of the following servers: Red Hat JBoss EAP 7 or Red Hat JBoss Web Server 5. For more information, see https://access.redhat.com/articles/3405381[Red Hat Process Automation Manager 7 Supported Configurations].", "Would you like to generate an automatic installation script and properties file?", "java -jar rhpam-installer-7.13.5.jar <path-to-auto.xml-file>", "ALTER DATABASE <DBNAME> SET ALLOW_SNAPSHOT_ISOLATION ON ALTER DATABASE <DBNAME> SET READ_COMMITTED_SNAPSHOT ON" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/assembly_installing-using-installer_install-on-jws
Chapter 3. Enhancements
Chapter 3. Enhancements This section describes the major enhancements introduced in Red Hat OpenShift Data Foundation 4.15. 3.1. Deployment of one active and one standby MGR pods by OCS Operator The ocs-operator now deploys two MGR pods by default, one active and one standby. This enhancement does not impact cluster resource requirements. 3.2. Support for custom timeouts for Reclaim Space operation Custom timeout values can be set for the reclaim space operation to avoid failure of the operation with the error context deadline exceeded . Whether the error occurs depends on the RBD volume size and its data pattern. For more information, see Enabling reclaim space operation using ReclaimSpaceJob . 3.3. Modularized must-gather utility The OpenShift Data Foundation must-gather utility can be run in a modular mode and collect only the resources that are required. This enhancement helps to avoid the long run times of must-gather in some environments and makes it faster to focus on the components being inspected. For more information, see Downloading log files and diagnostic information using must-gather . 3.4. Prehook to MCG's database pod to gracefully flush caches when the pod is going down A prehook to Multicloud Object Gateway's database pod (DB pod) is added to gracefully flush the cache when the pod is going down. This graceful shutdown reduces the risk of corruption in the journal file of the DB when the DB pod is taken down in a planned manner. However, this does not apply to unplanned shutdowns, such as an OpenShift node crash. 3.5. All controller operations to reach one controller When a CSI-driver provides the CONTROLLER_SERVICE capability, the sidecar tries to become the leader by obtaining a lease based on the name of the CSI-driver. The Kubernetes CSI-Addons Operator connects to a random registered CSI-Addons sidecar and makes the RPC calls to that sidecar. This can create a problem if the CSI-driver has implemented some internal locking mechanism or has some local cache for the lifetime of that instance. The NetworkFence (and other CSI-Addons) operations are only sent to a CSI-Addons sidecar that has the CONTROLLER_SERVICE capability. There is a single leader for the CSI-Addons sidecars that support that capability, and the leader can be identified by the Lease object for the CSI driver name. 3.6. Enhanced data distribution for CephFS storage class This feature enables the default subvolume groups of Container Storage Interface (CSI) to be automatically pinned to the ranks according to the default pinning configuration. This is useful when you have multiple active CephFS metadata servers (MDSs) in the cluster. This helps to better distribute the load across MDS ranks in stable and predictable ways. 3.7. Ability to use bluestore-rdr as object storage device backing store OpenShift Data Foundation provides the ability to use bluestore-rdr as the object storage device (OSD) backing store for brownfield customers. bluestore-rdr has improved performance over the bluestore backing store, which is important when the cluster is required to be used for Regional Disaster Recovery (RDR). Also, it is possible to migrate the OSDs to bluestore-rdr from the user interface.
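As a sketch of the custom timeout described in section 3.2, a ReclaimSpaceJob resource might look like the following; the API group, field names, and the 360-second value are assumptions based on the CSI-Addons ReclaimSpaceJob custom resource, and the PVC name is hypothetical, so verify the exact schema against Enabling reclaim space operation using ReclaimSpaceJob :
apiVersion: csiaddons.openshift.io/v1alpha1   # assumed CSI-Addons API group and version
kind: ReclaimSpaceJob
metadata:
  name: reclaimspacejob-sample                # hypothetical name
spec:
  target:
    persistentVolumeClaim: rbd-pvc            # hypothetical RBD-backed PVC
  timeout: 360                                # larger timeout to avoid "context deadline exceeded"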
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/4.15_release_notes/enhancements
Chapter 27. Using Ansible to automate group membership in IdM
Chapter 27. Using Ansible to automate group membership in IdM Using automatic group membership, you can assign users and hosts user groups and host groups automatically, based on their attributes. For example, you can: Divide employees' user entries into groups based on the employees' manager, location, position or any other attribute. You can list all attributes by entering ipa user-add --help on the command-line. Divide hosts into groups based on their class, location, or any other attribute. You can list all attributes by entering ipa host-add --help on the command-line. Add all users or all hosts to a single global group. You can use Red Hat Ansible Engine to automate the management of automatic group membership in Identity Management (IdM). This section covers the following topics: Preparing your Ansible control node for managing IdM Using Ansible to ensure that an automember rule for an IdM user group is present Using Ansible to ensure that a condition is present in an IdM user group automember rule Using Ansible to ensure that a condition is absent in an IdM user group automember rule Using Ansible to ensure that an automember rule for an IdM group is absent Using Ansible to ensure that a condition is present in an IdM host group automember rule 27.1. Preparing your Ansible control node for managing IdM As a system administrator managing Identity Management (IdM), when working with Red Hat Ansible Engine, it is good practice to do the following: Create a subdirectory dedicated to Ansible playbooks in your home directory, for example ~/MyPlaybooks . Copy and adapt sample Ansible playbooks from the /usr/share/doc/ansible-freeipa/* and /usr/share/doc/rhel-system-roles/* directories and subdirectories into your ~/MyPlaybooks directory. Include your inventory file in your ~/MyPlaybooks directory. By following this practice, you can find all your playbooks in one place and you can run your playbooks without invoking root privileges. Note You only need root privileges on the managed nodes to execute the ipaserver , ipareplica , ipaclient , ipabackup , ipasmartcard_server and ipasmartcard_client ansible-freeipa roles. These roles require privileged access to directories and the dnf software package manager. Follow this procedure to create the ~/MyPlaybooks directory and configure it so that you can use it to store and run Ansible playbooks. Prerequisites You have installed an IdM server on your managed nodes, server.idm.example.com and replica.idm.example.com . You have configured DNS and networking so you can log in to the managed nodes, server.idm.example.com and replica.idm.example.com , directly from the control node. You know the IdM admin password. Procedure Create a directory for your Ansible configuration and playbooks in your home directory: Change into the ~/MyPlaybooks/ directory: Create the ~/MyPlaybooks/ansible.cfg file with the following content: Create the ~/MyPlaybooks/inventory file with the following content: This configuration defines two host groups, eu and us , for hosts in these locations. Additionally, this configuration defines the ipaserver host group, which contains all hosts from the eu and us groups. Optional: Create an SSH public and private key. To simplify access in your test environment, do not set a password on the private key: Copy the SSH public key to the IdM admin account on each managed node: You must enter the IdM admin password when you enter these commands. 
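Before moving on, one way to confirm that the control node can reach the managed nodes with this inventory is an ad-hoc ping run from the ~/MyPlaybooks directory; this uses the standard Ansible ping module (an SSH round trip, not ICMP) against the ipaserver group defined in the inventory file shown later in this chapter:
ansible -i inventory ipaserver -m ping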
Additional resources Installing an Identity Management server using an Ansible playbook How to build your inventory 27.2. Using Ansible to ensure that an automember rule for an IdM user group is present The following procedure describes how to use an Ansible playbook to ensure an automember rule for an Identity Management (IdM) group exists. In the example, the presence of an automember rule is ensured for the testing_group user group. Prerequisites You know the IdM admin password. The testing_group user group exists in IdM. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the automember-group-present.yml Ansible playbook file located in the /usr/share/doc/ansible-freeipa/playbooks/automember/ directory: Open the automember-group-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipaautomember task section: Set the ipaadmin_password variable to the password of the IdM admin . Set the name variable to testing_group . Set the automember_type variable to group . Ensure that the state variable is set to present . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources See Benefits of automatic group membership and Automember rules . See Using Ansible to ensure that a condition is present in an IdM user group automember rule . See the README-automember.md file in the /usr/share/doc/ansible-freeipa/ directory. See the /usr/share/doc/ansible-freeipa/playbooks/automember directory. 27.3. Using Ansible to ensure that a specified condition is present in an IdM user group automember rule The following procedure describes how to use an Ansible playbook to ensure that a specified condition exists in an automember rule for an Identity Management (IdM) group. In the example, the presence of a UID-related condition in the automember rule is ensured for the testing_group group. By specifying the .* condition, you ensure that all future IdM users automatically become members of the testing_group . Prerequisites You know the IdM admin password. The testing_group user group and automember user group rule exist in IdM. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. 
Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the automember-hostgroup-rule-present.yml Ansible playbook file located in the /usr/share/doc/ansible-freeipa/playbooks/automember/ directory and name it, for example, automember-usergroup-rule-present.yml : Open the automember-usergroup-rule-present.yml file for editing. Adapt the file by modifying the following parameters: Rename the playbook to correspond to your use case, for example: Automember user group rule member present . Rename the task to correspond to your use case, for example: Ensure an automember condition for a user group is present . Set the following variables in the ipaautomember task section: Set the ipaadmin_password variable to the password of the IdM admin . Set the name variable to testing_group . Set the automember_type variable to group . Ensure that the state variable is set to present . Ensure that the action variable is set to member . Set the inclusive key variable to UID . Set the inclusive expression variable to . * This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification Log in as an IdM administrator. Add a user, for example: Additional resources See Applying automember rules to existing entries using the IdM CLI . See Benefits of automatic group membership and Automember rules . See the README-automember.md file in the /usr/share/doc/ansible-freeipa/ directory. See the /usr/share/doc/ansible-freeipa/playbooks/automember directory. 27.4. Using Ansible to ensure that a condition is absent from an IdM user group automember rule The following procedure describes how to use an Ansible playbook to ensure a condition is absent from an automember rule for an Identity Management (IdM) group. In the example, the absence of a condition in the automember rule is ensured that specifies that users whose initials are dp should be included. The automember rule is applied to the testing_group group. By applying the condition, you ensure that no future IdM user whose initials are dp becomes a member of the testing_group . Prerequisites You know the IdM admin password. The testing_group user group and automember user group rule exist in IdM. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the automember-hostgroup-rule-absent.yml Ansible playbook file located in the /usr/share/doc/ansible-freeipa/playbooks/automember/ directory and name it, for example, automember-usergroup-rule-absent.yml : Open the automember-usergroup-rule-absent.yml file for editing. Adapt the file by modifying the following parameters: Rename the playbook to correspond to your use case, for example: Automember user group rule member absent . Rename the task to correspond to your use case, for example: Ensure an automember condition for a user group is absent . 
Set the following variables in the ipaautomember task section: Set the ipaadmin_password variable to the password of the IdM admin . Set the name variable to testing_group . Set the automember_type variable to group . Ensure that the state variable is set to absent . Ensure that the action variable is set to member . Set the inclusive key variable to initials . Set the inclusive expression variable to dp . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification Log in as an IdM administrator. View the automember group: The absence of an Inclusive Regex: initials=dp entry in the output confirms that the testing_group automember rule does not contain the condition specified. Additional resources See Applying automember rules to existing entries using the IdM CLI . See Benefits of automatic group membership and Automember rules . See the README-automember.md file in the /usr/share/doc/ansible-freeipa/ directory. See the /usr/share/doc/ansible-freeipa/playbooks/automember directory. 27.5. Using Ansible to ensure that an automember rule for an IdM user group is absent The following procedure describes how to use an Ansible playbook to ensure an automember rule is absent for an Identity Management (IdM) group. In the example, the absence of an automember rule is ensured for the testing_group group. Note Deleting an automember rule also deletes all conditions associated with the rule. To remove only specific conditions from a rule, see Using Ansible to ensure that a condition is absent in an IdM user group automember rule . Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the automember-group-absent.yml Ansible playbook file located in the /usr/share/doc/ansible-freeipa/playbooks/automember/ directory: Open the automember-group-absent-copy.yml file for editing. Adapt the file by setting the following variables in the ipaautomember task section: Set the ipaadmin_password variable to the password of the IdM admin . Set the name variable to testing_group . Set the automember_type variable to group . Ensure that the state variable is set to absent . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources See Benefits of automatic group membership and Automember rules . See the README-automember.md file in the /usr/share/doc/ansible-freeipa/ directory. See the /usr/share/doc/ansible-freeipa/playbooks/automember directory. 27.6. Using Ansible to ensure that a condition is present in an IdM host group automember rule Follow this procedure to use Ansible to ensure that a condition is present in an IdM host group automember rule. 
The example describes how to ensure that hosts with the FQDN of .*.idm.example.com are members of the primary_dns_domain_hosts host group and hosts whose FQDN is .*.example.org are not members of the primary_dns_domain_hosts host group. Prerequisites You know the IdM admin password. The primary_dns_domain_hosts host group and automember host group rule exist in IdM. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the automember-hostgroup-rule-present.yml Ansible playbook file located in the /usr/share/doc/ansible-freeipa/playbooks/automember/ directory: Open the automember-hostgroup-rule-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipaautomember task section: Set the ipaadmin_password variable to the password of the IdM admin . Set the name variable to primary_dns_domain_hosts . Set the automember_type variable to hostgroup . Ensure that the state variable is set to present . Ensure that the action variable is set to member . Ensure that the inclusive key variable is set to fqdn . Set the corresponding inclusive expression variable to .*.idm.example.com . Set the exclusive key variable to fqdn . Set the corresponding exclusive expression variable to .*.example.org . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources See Applying automember rules to existing entries using the IdM CLI . See Benefits of automatic group membership and Automember rules . See the README-automember.md file in the /usr/share/doc/ansible-freeipa/ directory. See the /usr/share/doc/ansible-freeipa/playbooks/automember directory. 27.7. Additional resources Managing user accounts using Ansible playbooks Managing hosts using Ansible playbooks Managing user groups using Ansible playbooks Managing host groups using the IdM CLI
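Automember rules and conditions only affect entries created after they exist. To apply them to existing users and hosts, use the procedure Applying automember rules to existing entries using the IdM CLI listed above; a typical invocation, reusing the group and host group names from the examples in this chapter, is sketched here (check ipa automember-rebuild --help for the exact options available in your IdM version):
kinit admin
ipa automember-rebuild --type=group testing_group
ipa automember-rebuild --type=hostgroup primary_dns_domain_hosts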
[ "mkdir ~/MyPlaybooks/", "cd ~/MyPlaybooks", "[defaults] inventory = /home/ your_username /MyPlaybooks/inventory [privilege_escalation] become=True", "[ipaserver] server.idm.example.com [ipareplicas] replica1.idm.example.com replica2.idm.example.com [ipacluster:children] ipaserver ipareplicas [ipacluster:vars] ipaadmin_password=SomeADMINpassword [ipaclients] ipaclient1.example.com ipaclient2.example.com [ipaclients:vars] ipaadmin_password=SomeADMINpassword", "ssh-keygen", "ssh-copy-id [email protected] ssh-copy-id [email protected]", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/automember/automember-group-present.yml automember-group-present-copy.yml", "--- - name: Automember group present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure group automember rule admins is present ipaautomember: ipaadmin_password: \"{{ ipaadmin_password }}\" name: testing_group automember_type: group state: present", "ansible-playbook --vault-password-file=password_file -v -i inventory automember-group-present-copy.yml", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/automember/automember-hostgroup-rule-present.yml automember-usergroup-rule-present.yml", "--- - name: Automember user group rule member present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure an automember condition for a user group is present ipaautomember: ipaadmin_password: \"{{ ipaadmin_password }}\" name: testing_group automember_type: group state: present action: member inclusive: - key: UID expression: . *", "ansible-playbook --vault-password-file=password_file -v -i inventory automember-usergroup-rule-present.yml", "kinit admin", "ipa user-add user101 --first user --last 101 ----------------------- Added user \"user101\" ----------------------- User login: user101 First name: user Last name: 101 Member of groups: ipausers, testing_group", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/automember/automember-hostgroup-rule-absent.yml automember-usergroup-rule-absent.yml", "--- - name: Automember user group rule member absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure an automember condition for a user group is absent ipaautomember: ipaadmin_password: \"{{ ipaadmin_password }}\" name: testing_group automember_type: group state: absent action: member inclusive: - key: initials expression: dp", "ansible-playbook --vault-password-file=password_file -v -i inventory automember-usergroup-rule-absent.yml", "kinit admin", "ipa automember-show --type=group testing_group Automember Rule: testing_group", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/automember/automember-group-absent.yml automember-group-absent-copy.yml", "--- - name: Automember group absent example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure group automember rule admins is absent ipaautomember: ipaadmin_password: \"{{ ipaadmin_password }}\" name: testing_group automember_type: group state: absent", "ansible-playbook --vault-password-file=password_file -v -i inventory automember-group-absent.yml", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/automember/automember-hostgroup-rule-present.yml automember-hostgroup-rule-present-copy.yml", "--- - name: Automember user group rule member present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure an 
automember condition for a user group is present ipaautomember: ipaadmin_password: \"{{ ipaadmin_password }}\" name: primary_dns_domain_hosts automember_type: hostgroup state: present action: member inclusive: - key: fqdn expression: .*.idm.example.com exclusive: - key: fqdn expression: .*.example.org", "ansible-playbook --vault-password-file=password_file -v -i inventory automember-hostgroup-rule-present-copy.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/using-ansible-to-automate-group-membership-in-idm_managing-users-groups-hosts
Red Hat OpenStack Certification Workflow Guide
Red Hat OpenStack Certification Workflow Guide Red Hat Software Certification 2025 For Use with Red Hat OpenStack 17 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_certification_workflow_guide/index
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/using_jlink_to_customize_java_runtime_environment/proc-providing-feedback-on-redhat-documentation
Chapter 7. Compiler and Tools
Chapter 7. Compiler and Tools pcp rebased to version 3.11.8 The Performance Co-Pilot application (PCP) has been upgraded to upstream version 3.11.81, which provides a number of bug fixes and enhancements over the version. Notable enhancements include: A new client tool pcp2influxdb has been added to allow export of performance metric values to the influxdb database. New client tools pcp-mpstat and pcp-pidstat have been added to allow retrospective analysis of mpstat and pidstat values. New performance metrics have been added for device mapper, Ceph devices, cpusched cgroups, per-processor soft IRQs, buddyinfo , zoneinfo , shared memory, libvirt , same-page-sharing, lio , Redis , and Docker . Additional performance metrics from several subsystems are now available for a variety of PCP analysis tools. (BZ# 1423020 ) systemtap rebased to version 3.1 The systemtap package has been upgraded to upstream version 3.1, which provides a number of bug fixes and enhancements over the version. Notable changes include: The probes for system calls no longer default to being based on debuginfo information. Support for probing Python functions has been added. Access to Java function parameters has been made more uniform. The performance of statistical aggregate variables has been improved. A new statistics operator @variance has been added. More options for fetching and setting user-space values have been added. NFS monitoring has been improved with sample scripts and tapset compatibility fixes. (BZ# 1398393 , BZ# 1416204 , BZ#1433391) valgrind rebased to version 3.12 The valgrind package has been upgraded to upstream version 3.12, which provides a number of bug fixes and enhancements over the version. Notable changes include: A new option --ignore-range-below-sp has been added to the memcheck tool to ignore memory accesses below the stack pointer. This is a generic replacement for the now deprecated option --workaround-gcc296-bugs=yes . The maximum number of callers in a suppression entry generated by the --gen-suppressions=yes option is now equal to the value given by the --num-callers option. The cost of instrumenting code blocks for the most common use case, the memcheck tool on the AMD64 and Intel 64 architectures, has been reduced. Performance has been improved for debugging programs which discard a lot of instruction address ranges of 8KB or less. Support for IBM Power 9 (ISA 3.0) architecture has been added. Partial support for AMD FMA4 instructions has been added. Support for cryptographic and CRC instructions on the 64-bit ARM architecture version 8 has been added. (BZ# 1391217 ) New package: unitsofmeasurement The unitsofmeasurement package enables expressing units of measurement in Java code. With the new API for units of measurement, handling of physical quantities is easier and less error-prone. The package's API is efficient in use of memory and resources. (BZ#1422263) SSL/TLS certificate verification for HTTP clients is now enabled by default in the Python standard library The default global setting for HTTP clients has been changed in the Python standard library to verify SSL/TLS certificates by default. Customers using the file-based configuration are not affected. For details, see https://access.redhat.com/articles/2039753 . (BZ#1219110) Support for %gemspec_add_dep and %gemspec_remove_dep has been added This update adds support for the %gemspec_add_dep and %gemspec_remove_dep macros. These macros allow easier adjustment of rubygem-* package dependencies. 
In addition, all current macros have been extended to improve support for pre-release version of packages. (BZ# 1397390 ) ipmitool rebased to version 1.8.18 The ipmitool package has been upgraded to upstream version 1.8.18, which provides a number of bug fixes and enhancements over the version. Notable changes include: The PEF user interface has been redesigned A new subcommand lan6 has been added for IP version 6 local area network parameters Support for VITA-specific sensor types and events has been added Support for HMAC_MD5 and HMAC_SHA256 encryption has been added Support for checking PICMG extension 5.x has been added Support for USB medium as a new communication interface has been added The USB driver has been enabled by default for GNU Linux systems (BZ#1398658) lshw updated for the little-endian variant of IBM Power The lshw packages, which provide detailed information on the hardware configuration of a machine, have been updated for the little-endian variant of IBM Power System. (BZ#1368704) perf now supports uncore events on Intel Xeon v5 With this update, Performance analysis tool for Linux (perf) has been updated to support uncore events on Intel Xeon v5 server CPU. These events provide additional performance monitoring information for advanced users. (BZ#1355919) dmidecode updated The dmidecode package has been updated to a later version, which provides several bug fixes and hardware enablement improvements. (BZ#1385884) iSCSI now supports configuring the ALUA operation by using targetcli With multiple paths from the initiator to a target, you can use Asymmetric Logical Unit Assignment (ALUA) to configure preferences for how to use the paths in a non-uniform, preferential way. The Linux-IO (LIO) kernel target has always supported this feature. With this update, you can use the targetcli command shell to configure the ALUA operation. (BZ#1243410) jansson rebased to version 2.10 The jansson library has been updated to version 2.10, which provides several bug fixes and enhancements over the version. Notably, interfaces have been added to support the clevis , tang and jose applications. (BZ# 1389805 ) A new compatibility environmental variable for egrep and fgrep In an earlier grep rebase, the egrep and fgrep commands were replaced by grep -E and grep -F respectively. This change could affect customer scripts because only grep was shown in the outupt of the ps command. To prevent such problems, this update introduces a new compatibility environmental variable: GREP_LEGACY_EGREP_FGREP_PS . To preserve showing egrep and fgrep in ps output, set the variable to 1: (BZ#1297441) lastcomm now supports the --pid option The lastcomm command now supports the --pid option. This option shows the process ID (PID) and parent-process ID (PPID) for each record if supported by the kernel. (BZ# 1255183 ) New package: perl-Perl4-CoreLibs A new perl-Perl4-CoreLibs package is now available in the Base channel of Red Hat Enterprise Linux 7. This package contains libraries that were previously available in Perl 4 but were removed from Perl 5.16, which is distributed with Red Hat Enterprise Linux 7. In the release, these libraries were provided in a Perl subpackage through the Optional channel. (BZ#1366724) tar now follows symlinks to directories when extracting from the archive This update adds the --keep-directory-symlink option to the tar command. This option changes the behavior of tar when it encounters a symlink with the same name as the directory that it is about to extract. 
By default, tar would first remove the symlink and then proceed extracting the directory. The --keep-directory-symlink option disables this behavior and instructs tar to follow symlinks to directories when extracting from the archive. (BZ#1350640) The IO::Socket::SSL Perl module now supports restricting of TLS version The Net:SSLeay Perl module has been updated to support explicit specification of the TLS protocol versions 1.1 or 1.2 to improve security, and the IO::Socket::SSL module has been updated accordingly. When a new IO::Socket::SSL object is created, it is now possible to restrict the TLS version to 1.1 or 1.2 by setting the SSL_version option to TLSv1_1 or TLSv1_2 respectively. Alternatively, TLSv11 and TLSv12 can be used. Note that these values are case-sensitive. (BZ# 1335035 ) The Net:SSLeay Perl module now supports restricting of TLS version The Net:SSLeay Perl module has been updated to support explicit specification of the TLS protocol version, which can be used for improving security. To restrict TLS version to 1.1 or 1.2, set the Net::SSLeay::ssl_version variable to 11 or 12 respectively. (BZ# 1335028 ) wget now supports specification of the TLS protocol version Previously, the wget utility used the highest TLS protocol version 1.2 by default when connecting to a remote server. With this update, wget has been enhanced to allow the user to explicitly select the TLS protocol minor version by adding the --secure-protocol=TLSv1_1 or --secure-protocol=TLSv1_2 command-line options to the wget command. (BZ#1439811) tcpdump rebased to version 4.9.0 The tcpdump package has been upgraded to upstream version 4.9.0, which provides a number of bug fixes and enhancements over the version. Notable changes include: Many security vulnerabilities have been fixed Numerous improvements have been made in the dissection of popular network protocols The default snaplen feature has been increased to 262144 bytes The capture buffer has been enlarged to 4 MiB (BZ#1422473) The option to set capture direction for tcpdump changed from -P to -Q Previously, the tcpdump utility in Red Hat Enterprise Linux used the -P option to set the capture direction, while the upstream version used -Q . The -Q option has been implemented and is now preferred. The -P option retains the function as an alias of -Q , but displays a warning. (BZ# 1292056 ) OpenJDK now supports SystemTap on the 64-bit ARM architecture The OpenJDK platform now supports introspection with the SystemTap instrumentation tool on the 64-bit ARM architecture. (BZ#1373986) sos rebased to version 3.4 The sos package has been updated to upstream version 3.4, which provides a number of enhancements, new features, and bug fixes, including: New plug-ins have been added for ceph_ansible , collectd , crypto , dracut , gnocchi , jars , nfsganesha , nodejs , npm , openstack_ansible , openstack_instack , openstack_manila , salt , saltmaster , and storageconsole API plug-in enhancements Internationalisation updates The networking plug-in no longer crashes when a network name contains the single quote character ' The foreman-debug plug-in is now run with a longer timeout to prevent incomplete foreman-debug information collected Certain private SSL certificate files are no longer collected (BZ# 1414879 ) targetd rebased to version 0.8.6 The targetd packages have been upgraded to upstream version 0.8.6, which provides a number of bug fixes and enhancements over the version. 
Notably, the targetd service now runs on either Python 2 or Python 3 run time, and the following APIs have been added: initiator_list , access_group_list , access_group_create , access_group_destroy , access_group_init_add , access_group_init_del , access_group_map_list , access_group_map_create , and access_group_map_destroy . Notable bug fixes include: targetd is now compliant with JSON-RPC response version 2.0. the export_create API can now be used to map the same LUN to multiple initiators. targetd now ensures that SSL certificates are present at start-up. (BZ# 1162381 ) shim rebased to version 12-1 With this update, the shim package has been upgraded to upstream version 12-1, which provides a number of bug fixes and enhancements over the version. Notably, the support for 32-bit UEFI firmware and Extensible Firmware Interface (EFI) utilities has been added. (BZ#1310766) rubygem-abrt rebased to version 0.3.0 The rubygem-abrt package has been rebased to version 0.3.0, which provides several bug fixes and enhancements over the version. Notably: The Ruby ABRT handler now supports uReports , automatic anonymous microreports. With uReports enabled, developers are promptly notified about application issues and are able to fix bugs and resolve problems faster. Previously, when a Ruby application was using Bundler to manage its dependencies and an error occurred, an incorrect logic was used to load components of the Ruby ABRT handler. Consequently, an unexpected LoadReport error was reported to the user instead of a proper ABRT report. The loading logic has been fixed, and the Ruby application errors are now correctly handled and reported using ABRT . (BZ# 1418750 ) New package: http-parser The new http-parser package provides a utility for parsing HTTP messages. It parses both requests and responses. The parser is designed to be used in applications managing HTTP performance. It does not make any syscalls or allocations, it does not buffer data, and it can be interrupted at any time. Depending on your architecture, it only requires about 40 bytes of data per message stream. (BZ#1393819) Intel and IBM POWER transactional memory support for all default POSIX mutexes The default POSIX mutexes can be transparently substituted with Intel and IBM POWER transactional memory support, which significantly reduces the cost of lock acquisition. To enable transactional memory support for all default POSIX mutexes, set the RHEL_GLIBC_TUNABLES=glibc.elision.enable environment variable to 1 . As a result, performance of some applications can be improved. Developers are advised to use profiling to decide whether enabling of this feature improves performance for their applications. (BZ# 841653 , BZ#731835) glibc now supports group merging The ability to merge group members from different Name Service modules has been added to glibc . As a result, management of centralized user access control and group membership across multiple hosts is now easier. (BZ# 1298975 ) glibc now supports optimized string comparison functions on The IBM POWER9 architecture The string comparison functions strcmp and strncmp from the glibc library have been optimized for the IBM POWER9 architecture. (BZ#1320947) Improved performance for dynamically loaded libraries using the Intel SSE, AVX and AVX512 features Dynamic library loading has been updated for libraries using the Intel SSE, AVX, and AVX512 features. As a result, performance while loading these libraries has improved. 
Additionally, support for LD_AUDIT-style auditing has been added. (BZ# 1421155 ) elfutils rebased to version 0.168 The elfutils package has been upgraded to upstream version 0.168, which provides a number of bug fixes and enhancements: The option --symbols of the eu-readelf utility now allows selecting the section for displaying symbols. New functions for the creation of ELF/DWARF string tables have been added to the libdw library. The DW_LANG_PL1 constant has been changed to DW_LANG_PLI . The name is still accepted. The return type of the gelf_newehdr and gelf_newphdr functions from the libelf library has been changed to void* for source compatibility with other libelf implementations. This change retains binary compatibility on all platforms supported by Red Hat Enterprise Linux. (BZ# 1400302 ) bison rebased to version 3.0.4 The bison package has been upgraded to upstream version 3.0.4, which provides a number of bug fixes and enhancements: Endless diagnostics caused by caret errors have been fixed. The -Werror=CATEGORY option has been added to treat specified warnings as errors. The warnings do not have to be explicitly activated using the -W option. Many improvements in handling of precedence rules and useless rules. Additionally, the following changes breaking backward compatibility have been introduced: The following features have been deprecated: YYFAIL , YYLEX_PARAM , YYPARSE_PARAM , yystype , yyltype Missing semicolons at the end of actions are no longer automatically added. To use Bison extensions with the autoconf utility versions 2.69 and earlier, pass the option -Wno-yacc to (AM_)YFLAGS . (BZ# 1306000 ) The system default CA bundle has been set as default in the compiled-in default setting or configuration in Mutt Previously, when connecting to a new system via TLS/SSL, the Mutt email client required the user to save the certificate. With this update, the system Certificate Authority (CA) bundle is set in Mutt by default. As a result, Mutt now connects via SSL/TLS to hosts with a valid certificate without prompting the user to approve or reject the certificate. (BZ# 1388511 ) objdump mixed listing speed up Previously, the BFD library for parsing DWARF debug information and locating source code was very slow. The BFD library is used by the objdump tool. As a consequence, objdump became significantly slower when producing a mixed listing of source code and disassembly. Performance of the BFD library has been improved. As a result, producing a mixed listing with objdump is faster. (BZ# 1366052 ) ethtool support for human readable output from the fjes driver The ethtool utility has been enhanced to provide a human readable form of register dump output from the fjes driver. As a result, users of ethtool can inspect the Fujitsu Extended Socket Network Device driver more comfortably. (BZ#1402701) ecj rebased to version 4.5.2 The ecj package has been upgraded to upstream version 4.5.2, which provides a number of bug fixes and enhancements over the version. Notably, support for features added to the Java language in version 8 has been completed. As a result, compilation of Java code using Java 8 features no longer fails. This includes cases where code not using Java 8 features referenced code using these features, such as system classes provided by the Java Runtime Environment. (BZ# 1379855 ) rhino rebased to version 1.7R5 The rhino package has been upgraded to upstream version 1.7R5, which provides a number of bug fixes and enhancements over the version. 
Notably, the former problem with an infinite loop while parsing regular expressions has been fixed. Applications using Rhino that previously encountered this bug now function correctly. (BZ# 1350331 ) scap-security-guide and oscap-docker now support containers The user can now use the oscap-docker utility and the SCAP Security Guide to assess compliance of container or container image without encountering false positive results. Tests that make no sense in container context, such as partitioning, has been set to the not applicable value, and containers can be now scanned with a selected security policy. (BZ# 1404392 )
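Two of the new command-line options above can be exercised directly; the host name and archive name below are placeholders rather than values taken from this document:
wget --secure-protocol=TLSv1_2 https://server.example.com/file.tar   # restrict wget to TLS protocol version 1.2
tar --keep-directory-symlink -xf file.tar                            # follow an existing directory symlink while extracting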
[ "GREP_LEGACY_EGREP_FGREP_PS=1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/new_features_compiler_and_tools
7.270. virt-who
7.270. virt-who 7.270.1. RHBA-2013:0374 - virt-who bug fix and enhancement update Updated virt-who packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The virt-who packages provide an agent that collects information about virtual guests present in the system and reports them to the Red Hat Subscription Manager tool. Bug Fixes BZ# 825215 Previously, when running the virt-who service, unregistering a Red Hat Enterprise Virtualization Hypervisor host from the Subscription Asset Manager (SAM) server caused the service to be terminated with the following message: Only the last line of the aforementioned message should have been displayed. This bug has been fixed, and the traceback errors are now saved to the log file and not printed on the screen. BZ# 866890 When a snapshot of a virtual machine (VM) was created in Microsoft Hyper-V Server, the virt-who agent replaced the UUID of the VM file with the UUID of the snapshot. This bug has been fixed, and the UUID is not changed in the described case. Additionally, in certain cases, the virt-who agent running with the "--hyperv" command-line option terminated with the following message: This bug has been fixed and the aforementioned error no longer occurs. BZ# 869960 Previously, the virt-who agent failed to function correctly when a URL, which was set in the VIRTWHO_ESX_SERVER parameter, was missing the initial "https://" string. With this update, virt-who has been modified, and "https://" is no longer required in VIRTWHO_ESX_SERVER. Enhancements BZ# 808060 With this update, the virt-who agent has been modified to start as a foreground process and to print error messages or debugging output (the "-d" command-line option) to the standard error output. Moreover, the following command-line options have been enhanced: the "-o" option provides the one-shot mode and exits after sending the list of guests; the "-b" option and the "service virt-who start" command equivalently start on the background and send data to the /var/log/ directory. BZ# 846788 The virt-who agent has been modified to support Red Hat Enterprise Virtualization Manager polling. BZ#860854 With this update, the virt-who agent has been modified to correctly recognize guest virtual machines, which are installed on top of Microsoft Hyper-V Server. BZ# 868149 The virt-who manual pages and the output of the "virt-who --help" command have been enhanced with clarifying information. In addition, a typographical error has been corrected in both texts. All users of virt-who are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
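The foreground, one-shot, and background modes described in BZ#808060 can be used as follows; which combination is appropriate depends on the deployment, so treat this as a sketch rather than a recommended configuration:
virt-who -o -d    # run once in the foreground with debugging output on standard error, then exit
virt-who -b       # run in the background, equivalent to "service virt-who start", logging under /var/log/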
[ "SubscriptionManagerError: No such file or directory Error in communication with candlepin, trying to recover Unable to read certificate, system is not registered or you are not root", "AttributeError: HyperV instance has no attribute 'ping'" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/virt-who
Appendix A. Composable service parameters
Appendix A. Composable service parameters The following parameters are used for the outputs in all composable services: service_name config_settings service_config_settings global_config_settings step_config upgrade_tasks upgrade_batch_tasks The following parameters are used for the outputs specifically for containerized composable services: puppet_config kolla_config docker_config container_puppet_tasks host_prep_tasks fast_forward_upgrade_tasks A.1. All composable services The following parameters apply to all composable services. service_name The name of your service. You can use this to apply configuration from other composable services with service_config_settings . config_settings Custom hieradata settings for your service. service_config_settings Custom hieradata settings for another service. For example, your service might require its endpoints registered in OpenStack Identity ( keystone ). This provides parameters from one service to another and provide a method of cross-service configuration, even if the services are on different roles. global_config_settings Custom hieradata settings distributed to all roles. step_config A Puppet snippet to configure the service. This snippet is added to a combined manifest created and run at each step of the service configuration process. These steps are: Step 1 - Load balancer configuration Step 2 - Core high availability and general services (Database, RabbitMQ, NTP) Step 3 - Early OpenStack Platform Service setup (Storage, Ring Building) Step 4 - General OpenStack Platform services Step 5 - Service activation (Pacemaker) and OpenStack Identity (keystone) role and user creation In any referenced puppet manifest, you can use the step hieradata (using hiera('step') ) to define specific actions at specific steps during the deployment process. upgrade_tasks Ansible snippet to help with upgrading the service. The snippet is added to a combined playbook. Each operation uses a tag to define a step , which includes: common - Applies to all steps step0 - Validation step1 - Stop all OpenStack services. step2 - Stop all Pacemaker-controlled services step3 - Package update and new package installation step4 - Start OpenStack service required for database upgrade step5 - Upgrade database upgrade_batch_tasks Performs a similar function to upgrade_tasks but only executes batch set of Ansible tasks in order they are listed. The default is 1 , but you can change this per role using the upgrade_batch_size parameter in a roles_data.yaml file. A.2. Containerized composable services The following parameters apply to all containerized composable services. puppet_config This section is a nested set of key value pairs that drive the creation of configuration files using puppet. Required parameters include: puppet_tags Puppet resource tag names that are used to generate configuration files with Puppet. Only the named configuration resources are used to generate a file. Any service that specifies tags will have the default tags of file,concat,file_line,augeas,cron appended to the setting. Example: keystone_config config_volume The name of the volume (directory) where the configuration files are generated for this service. Use this as the location to bind mount into the running Kolla container for configuration. config_image The name of the container image that will be used for generating configuration files. This is often the same container that the runtime service uses. 
Some services share a common set of configuration files which are generated in a common base container. step_config This setting controls the manifest that is used to create configuration files with Puppet. Use the following Puppet tags together with the manifest to generate a configuration directory for this container. kolla_config Creates a map of the kolla configuration in the container. The format begins with the absolute path of the configuration file and then uses the following sub-parameters: command The command to run when the container starts. config_files The location of the service configuration files ( source ) and the destination on the container ( dest ) before the service starts. Also includes options to either merge or replace these files on the container ( merge ) and whether to preserve the file permissions and other properties ( preserve_properties ). permissions Set permissions for certain directories on the containers. Requires a path and an owner (and group). You can also apply the permissions recursively ( recurse ). The following is an example of the kolla_config parameter for the keystone service: docker_config Data passed to the paunch command to configure containers at each step. step_0 - Container configuration files generated per hiera settings. step_1 - Load Balancer configuration Baremetal configuration Container configuration step_2 - Core Services (Database/Rabbit/NTP/etc.) Baremetal configuration Container configuration step_3 - Early OpenStack Service setup (Ringbuilder, etc.) Baremetal configuration Container configuration step_4 - General OpenStack Services Baremetal configuration Container configuration Keystone container post initialization (tenant, service, endpoint creation) step_5 - Service activation (Pacemaker) Baremetal configuration Container configuration The YAML uses a set of parameters to define the container to run at each step and the podman settings associated with each container. For example: This creates a keystone container and uses the respective parameters to define details like the image to use, the networking type, and environment variables. container_puppet_tasks Provides data to drive the container-puppet.py tool directly. The task is executed only once within the cluster (not on each node) and is useful for several Puppet snippets required for initialization of things like keystone endpoints and database users. For example: host_prep_tasks Ansible snippet to execute on the node host to prepare it for containerized services. For example, you might need to create a specific directory to mount to the container during its creation. fast_forward_upgrade_tasks Ansible snippet to help with the fast forward upgrade process. This snippet is added to a combined playbook. Each operation uses tags to define step and release . The step usually follows these stages: step=0 - Check running services step=1 - Stop the service step=2 - Stop the cluster step=3 - Update repositories step=4 - Database backups step=5 - Pre-package update commands step=6 - Package updates step=7 - Post-package update commands step=8 - Database updates step=9 - Verification The tag corresponds to a release: tag=ocata - OpenStack Platform 11 tag=pike - OpenStack Platform 12 tag=queens - OpenStack Platform 13
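To show how the parameters from Section A.1 fit together, the following is a minimal, hypothetical role_data output for a service template; the service name, hieradata key, and Puppet profile are invented for illustration, and the exact template layout should be taken from an existing tripleo-heat-templates service file rather than from this sketch:
outputs:
  role_data:
    description: Example composable service (illustration only)
    value:
      service_name: my_example_service                  # hypothetical service name
      config_settings:
        my_example::bind_port: 8999                     # hypothetical hieradata key
      step_config: |
        include ::tripleo::profile::base::my_example    # hypothetical Puppet profile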
[ "kolla_config: /var/lib/kolla/config_files/keystone.json: command: /usr/sbin/httpd -DFOREGROUND config_files: - source: \"/var/lib/kolla/config_files/src/*\" dest: \"/\" merge: true preserve_properties: true /var/lib/kolla/config_files/keystone_cron.json: command: /usr/sbin/crond -n config_files: - source: \"/var/lib/kolla/config_files/src/*\" dest: \"/\" merge: true preserve_properties: true permissions: - path: /var/log/keystone owner: keystone:keystone recurse: true", "docker_config: step_3: keystone: start_order: 2 image: *keystone_image net: host privileged: false restart: always healthcheck: test: /openstack/healthcheck volumes: *keystone_volumes environment: - KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "container_puppet_tasks: # Keystone endpoint creation occurs only on single node step_3: config_volume: 'keystone_init_tasks' puppet_tags: 'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain' step_config: 'include ::tripleo::profile::base::keystone' config_image: *keystone_config_image" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/partner_integration/composable-service-parameters
Appendix A. Primitive types
Appendix A. Primitive types This section describes the primitive data types supported by the API. A.1. String primitive A finite sequence of Unicode characters. A.2. Boolean primitive Represents the false and true concepts used in mathematical logic. The valid values are the strings false and true . Case is ignored by the engine, so for example False and FALSE are also valid values. However, the server will always return lower case values. For backwards compatibility with older versions of the engine, the values 0 and 1 are also accepted. The value 0 has the same meaning as false , and 1 has the same meaning as true . Try to avoid using these values, as support for them may be removed in the future. A.3. Integer primitive Represents the mathematical concept of integer number. The valid values are finite sequences of decimal digits. Currently the engine implements this type using a signed 32 bit integer, so the minimum value is -2^31 (-2147483648) and the maximum value is 2^31-1 (2147483647). However, there are some attributes in the system where the range of values possible with 32 bits isn't enough. In those exceptional cases the engine uses 64 bit integers, in particular for the following attributes: Disk.actual_size Disk.provisioned_size GlusterClient.bytes_read GlusterClient.bytes_written Host.max_scheduling_memory Host.memory HostNic.speed LogicalUnit.size MemoryPolicy.guaranteed NumaNode.memory QuotaStorageLimit.limit StorageDomain.available StorageDomain.used StorageDomain.committed VmBase.memory For these exceptional cases the minimum value is -2^63 (-9223372036854775808) and the maximum value is 2^63-1 (9223372036854775807). Note In the future the integer type will be implemented using unlimited precision integers, so the above limitations and exceptions will eventually disappear. A.4. Decimal primitive Represents the mathematical concept of real number. Currently the engine implements this type using 32 bit IEEE 754 single precision floating point numbers. For some attributes this isn't enough precision. In those exceptional cases the engine uses 64 bit double precision floating point numbers, in particular for the following attributes: QuotaStorageLimit.usage QuotaStorageLimit.memory_limit QuotaStorageLimit.memory_usage Note In the future the decimal type will be implemented using unlimited precision decimal numbers, so the above limitations and exceptions will eventually disappear. A.5. Date primitive Represents a date and time. The format returned by the engine is the one described in the XML Schema specification when requesting XML. For example, if you send a request like this to retrieve the XML representation of a virtual machine: The response body will contain the following XML document: <vm id="123" href="/ovirt-engine/api/vms/123"> ... <creation_time>2016-09-08T09:53:35.138+02:00</creation_time> ... </vm> When requesting the JSON representation the engine uses a different format: an integer containing the number of milliseconds since Jan 1st, 1970 (the start of the Unix epoch). For example, if you send a request like this to retrieve the JSON representation of a virtual machine: The response body will contain the following JSON document: { "id": "123", "href": "/ovirt-engine/api/vms/123", ... "creation_time": 1472564909990, ... } Note In both cases, the dates returned by the engine use the time zone configured in the server where it is running, in the above examples it is UTC+2.
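As a concrete illustration of the 64 bit exceptions listed above, updating a virtual machine's memory to 4 GiB sends an integer value (4294967296 bytes) that does not fit in a signed 32 bit integer. The request layout below follows the GET examples in this appendix, and the VM id 123 is the same placeholder, so treat it as a sketch of the value range rather than a guaranteed call for your engine version:
PUT /ovirt-engine/api/vms/123
Content-Type: application/xml
Accept: application/xml

<vm>
  <memory>4294967296</memory>
</vm>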
[ "GET /ovirt-engine/api/vms/123 Accept: application/xml", "<vm id=\"123\" href=\"/ovirt-engine/api/vms/123\"> <creation_time>2016-09-08T09:53:35.138+02:00</creation_time> </vm>", "GET /ovirt-engine/api/vms/123 Accept: application/json", "{ \"id\": \"123\", \"href\": \"/ovirt-engine/api/vms/123\", \"creation_time\": 1472564909990, }" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/rest_api_guide/documents-a01_primitive_types
Chapter 1. Introduction to InfiniBand and RDMA
Chapter 1. Introduction to InfiniBand and RDMA InfiniBand refers to two distinct things: The physical link-layer protocol for InfiniBand networks The InfiniBand Verbs API, an implementation of the remote direct memory access (RDMA) technology RDMA provides access between the main memory of two computers without involving an operating system, cache, or storage. By using RDMA, data transfers complete with high throughput, low latency, and low CPU utilization. In a typical IP data transfer, when an application on one machine sends data to an application on another machine, the following actions happen on the receiving end: The kernel must receive the data. The kernel must determine that the data belongs to the application. The kernel wakes up the application. The kernel waits for the application to perform a system call into the kernel. The application copies the data from the internal memory space of the kernel into the buffer provided by the application. This process means that most network traffic is copied across the main memory of the system at least once if the host adapter uses direct memory access (DMA), or otherwise at least twice. Additionally, the computer performs a number of context switches between the kernel and the application. These context switches can cause a higher CPU load with high traffic rates while slowing down other tasks. Unlike traditional IP communication, RDMA communication bypasses kernel intervention in the communication process. This reduces the CPU overhead. After a packet enters the network, the RDMA protocol enables the host adapter to decide which application should receive it and where to store it in the memory space of that application. Instead of sending the packet for processing to the kernel and copying it into the memory of the user application, the host adapter directly places the packet contents in the application buffer. This process requires a separate API, the InfiniBand Verbs API, and applications need to implement the InfiniBand Verbs API to use RDMA. Red Hat Enterprise Linux supports both the InfiniBand hardware and the InfiniBand Verbs API. Additionally, it supports the following technologies to use the InfiniBand Verbs API on non-InfiniBand hardware: iWARP: A network protocol that implements RDMA over IP networks RDMA over Converged Ethernet (RoCE), which is also known as InfiniBand over Ethernet (IBoE): A network protocol that implements RDMA over Ethernet networks
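As an illustration of the stack described above, the following shell sketch checks whether RDMA-capable devices and the verbs utilities are visible on a Red Hat Enterprise Linux host; it assumes the libibverbs-utils package is installed so that the ibv_devinfo utility is available:
# List RDMA devices registered with the kernel:
ls /sys/class/infiniband/
# Query device and port attributes through the InfiniBand Verbs API:
ibv_devinfo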
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_infiniband_and_rdma_networks/understanding-infiniband-and-rdma_configuring-infiniband-and-rdma-networks
Observability overview
Observability overview OpenShift Container Platform 4.12 Contains information about observability for OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/observability_overview/index
Chapter 15. Using the Red Hat Quay API
Chapter 15. Using the Red Hat Quay API Red Hat Quay provides a full OAuth 2, RESTful API that: Is available from endpoints of each Red Hat Quay instance from the URL https://<yourquayhost>/api/v1 Lets you connect to endpoints, via a browser, to get, delete, post, and put Red Hat Quay settings by enabling the Swagger UI Can be accessed by applications that make API calls and use OAuth tokens Sends and receives data as JSON The following text describes how to access the Red Hat Quay API and use it to view and modify settings in your Red Hat Quay cluster. The following section lists and describes the API endpoints. 15.1. Accessing the Quay API from Quay.io If you don't have your own Red Hat Quay cluster running yet, you can explore the Red Hat Quay API available from Quay.io in your web browser: The API Explorer that appears shows Quay.io API endpoints. You will not see superuser API endpoints or endpoints for Red Hat Quay features that are not enabled on Quay.io (such as Repository Mirroring). From API Explorer, you can get, and sometimes change, information on: Billing, subscriptions, and plans Repository builds and build triggers Error messages and global messages Repository images, manifests, permissions, notifications, vulnerabilities, and image signing Usage logs Organizations, members and OAuth applications User and robot accounts and more... Select an endpoint to open it and view the Model Schema for each part of the endpoint. Open an endpoint, enter any required parameters (such as a repository name or image), then select the Try it out! button to query or change settings associated with a Quay.io endpoint. 15.2. Creating an OAuth access token OAuth access tokens are credentials that allow you to access protected resources in a secure manner. With Red Hat Quay, you must create an OAuth access token before you can access the API endpoints of your organization. Use the following procedure to create an OAuth access token. Prerequisites You have logged in to Red Hat Quay as an administrator. Procedure On the main page, select an Organization. In the navigation pane, select Applications . Click Create New Application and provide a new application name, then press Enter . On the OAuth Applications page, select the name of your application. Optional. Enter the following information: Application Name Homepage URL Description Avatar E-mail Redirect/Callback URL prefix In the navigation pane, select Generate Token . Check the boxes for the following options: Administer Organization Administer Repositories Create Repositories View all visible repositories Read/Write to any accessible repositories Super User Access Administer User Read User Information Click Generate Access Token . You are redirected to a new page. Review the permissions that you are allowing, then click Authorize Application . Confirm your decision by clicking Authorize Application . You are redirected to the Access Token page. Copy and save the access token. Important This is the only opportunity to copy and save the access token. It cannot be reobtained after leaving this page. 15.3. Accessing your Quay API from a web browser By enabling Swagger, you can access the API for your own Red Hat Quay instance through a web browser. The Red Hat Quay API explorer is exposed through the Swagger UI at the following URL: This method of accessing the API does not include the superuser endpoints that are available on Red Hat Quay installations.
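Before enabling the Swagger UI, it can be useful to confirm that the discovery endpoint responds at all. This is a minimal sketch only; <yourquayhost> is a placeholder for your registry host, and the -k flag assumes a self-signed certificate may be in use:
# Check that the discovery endpoint returns HTTP 200:
curl -sk -o /dev/null -w '%{http_code}\n' https://<yourquayhost>/api/v1/discovery
# Peek at the top-level keys of the returned API description:
curl -sk https://<yourquayhost>/api/v1/discovery | jq 'keys'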
Here is an example of accessing a Red Hat Quay API interface running on the local system by running the swagger-ui container image: With the swagger-ui container running, open your web browser to localhost port 8888 to view API endpoints via the swagger-ui container. To avoid errors in the log such as "API calls must be invoked with an X-Requested-With header if called from a browser," add the following line to the config.yaml on all nodes in the cluster and restart Red Hat Quay: 15.4. Accessing the Red Hat Quay API from the command line You can use the curl command to GET, PUT, POST, or DELETE settings via the API for your Red Hat Quay cluster. Replace <token> with the OAuth access token you created earlier to get or change settings in the following examples. 15.4.1. Get superuser information For example: $ curl -X GET -H "Authorization: Bearer mFCdgS7SAIoMcnTsHCGx23vcNsTgziAa4CmmHIsg" http://quay-server:8080/api/v1/superuser/users/ | jq { "users": [ { "kind": "user", "name": "quayadmin", "username": "quayadmin", "email": "[email protected]", "verified": true, "avatar": { "name": "quayadmin", "hash": "357a20e8c56e69d6f9734d23ef9517e8", "color": "#5254a3", "kind": "user" }, "super_user": true, "enabled": true } ] } 15.4.2. Creating a superuser using the API Configure a superuser name, as described in the Deploy Quay book: Use the configuration editor UI or Edit the config.yaml file directly, with the option of using the configuration API to validate (and download) the updated configuration bundle Create the user account for the superuser name: Obtain an authorization token as detailed above, and use curl to create the user: The returned content includes a generated password for the new user account: { "username": "quaysuper", "email": "[email protected]", "password": "EH67NB3Y6PTBED8H0HC6UVHGGGA3ODSE", "encrypted_password": "fn37AZAUQH0PTsU+vlO9lS0QxPW9A/boXL4ovZjIFtlUPrBz9i4j9UDOqMjuxQ/0HTfy38goKEpG8zYXVeQh3lOFzuOjSvKic2Vq7xdtQsU=" } Now, when you request the list of users, it will show quaysuper as a superuser: $ curl -X GET -H "Authorization: Bearer mFCdgS7SAIoMcnTsHCGx23vcNsTgziAa4CmmHIsg" http://quay-server:8080/api/v1/superuser/users/ | jq { "users": [ { "kind": "user", "name": "quayadmin", "username": "quayadmin", "email": "[email protected]", "verified": true, "avatar": { "name": "quayadmin", "hash": "357a20e8c56e69d6f9734d23ef9517e8", "color": "#5254a3", "kind": "user" }, "super_user": true, "enabled": true }, { "kind": "user", "name": "quaysuper", "username": "quaysuper", "email": "[email protected]", "verified": true, "avatar": { "name": "quaysuper", "hash": "c0e0f155afcef68e58a42243b153df08", "color": "#969696", "kind": "user" }, "super_user": true, "enabled": true } ] } 15.4.3. List usage logs An internal API, /api/v1/superuser/logs , is available to list the usage logs for the current system. The results are paginated, so in the following example, more than 20 repositories were created to show how to use multiple invocations to access the entire result set. 15.4.3.1. 
Example for pagination First invocation USD curl -X GET -k -H "Authorization: Bearer qz9NZ2Np1f55CSZ3RVOvxjeUdkzYuCp0pKggABCD" https://example-registry-quay-quay-enterprise.apps.example.com/api/v1/superuser/logs | jq Initial output { "start_time": "Sun, 12 Dec 2021 11:41:55 -0000", "end_time": "Tue, 14 Dec 2021 11:41:55 -0000", "logs": [ { "kind": "create_repo", "metadata": { "repo": "t21", "namespace": "namespace1" }, "ip": "10.131.0.13", "datetime": "Mon, 13 Dec 2021 11:41:16 -0000", "performer": { "kind": "user", "name": "user1", "is_robot": false, "avatar": { "name": "user1", "hash": "5d40b245471708144de9760f2f18113d75aa2488ec82e12435b9de34a6565f73", "color": "#ad494a", "kind": "user" } }, "namespace": { "kind": "org", "name": "namespace1", "avatar": { "name": "namespace1", "hash": "6cf18b5c19217bfc6df0e7d788746ff7e8201a68cba333fca0437e42379b984f", "color": "#e377c2", "kind": "org" } } }, { "kind": "create_repo", "metadata": { "repo": "t20", "namespace": "namespace1" }, "ip": "10.131.0.13", "datetime": "Mon, 13 Dec 2021 11:41:05 -0000", "performer": { "kind": "user", "name": "user1", "is_robot": false, "avatar": { "name": "user1", "hash": "5d40b245471708144de9760f2f18113d75aa2488ec82e12435b9de34a6565f73", "color": "#ad494a", "kind": "user" } }, "namespace": { "kind": "org", "name": "namespace1", "avatar": { "name": "namespace1", "hash": "6cf18b5c19217bfc6df0e7d788746ff7e8201a68cba333fca0437e42379b984f", "color": "#e377c2", "kind": "org" } } }, ... { "kind": "create_repo", "metadata": { "repo": "t2", "namespace": "namespace1" }, "ip": "10.131.0.13", "datetime": "Mon, 13 Dec 2021 11:25:17 -0000", "performer": { "kind": "user", "name": "user1", "is_robot": false, "avatar": { "name": "user1", "hash": "5d40b245471708144de9760f2f18113d75aa2488ec82e12435b9de34a6565f73", "color": "#ad494a", "kind": "user" } }, "namespace": { "kind": "org", "name": "namespace1", "avatar": { "name": "namespace1", "hash": "6cf18b5c19217bfc6df0e7d788746ff7e8201a68cba333fca0437e42379b984f", "color": "#e377c2", "kind": "org" } } } ], "next_page": "gAAAAABhtzGDsH38x7pjWhD8MJq1_2FAgqUw2X9S2LoCLNPH65QJqB4XAU2qAxYb6QqtlcWj9eI6DUiMN_q3e3I0agCvB2VPQ8rY75WeaiUzM3rQlMc4i6ElR78t8oUxVfNp1RMPIRQYYZyXP9h6E8LZZhqTMs0S-SedaQJ3kVFtkxZqJwHVjgt23Ts2DonVoYwtKgI3bCC5" } Second invocation using next_page USD curl -X GET -k -H "Authorization: Bearer qz9NZ2Np1f55CSZ3RVOvxjeUdkzYuCp0pKggABCD" https://example-registry-quay-quay-enterprise.apps.example.com/api/v1/superuser/logs?next_page=gAAAAABhtzGDsH38x7pjWhD8MJq1_2FAgqUw2X9S2LoCLNPH65QJqB4XAU2qAxYb6QqtlcWj9eI6DUiMN_q3e3I0agCvB2VPQ8rY75WeaiUzM3rQlMc4i6ElR78t8oUxVfNp1RMPIRQYYZyXP9h6E8LZZhqTMs0S-SedaQJ3kVFtkxZqJwHVjgt23Ts2DonVoYwtKgI3bCC5 | jq Output from second invocation { "start_time": "Sun, 12 Dec 2021 11:42:46 -0000", "end_time": "Tue, 14 Dec 2021 11:42:46 -0000", "logs": [ { "kind": "create_repo", "metadata": { "repo": "t1", "namespace": "namespace1" }, "ip": "10.131.0.13", "datetime": "Mon, 13 Dec 2021 11:25:07 -0000", "performer": { "kind": "user", "name": "user1", "is_robot": false, "avatar": { "name": "user1", "hash": "5d40b245471708144de9760f2f18113d75aa2488ec82e12435b9de34a6565f73", "color": "#ad494a", "kind": "user" } }, "namespace": { "kind": "org", "name": "namespace1", "avatar": { "name": "namespace1", "hash": "6cf18b5c19217bfc6df0e7d788746ff7e8201a68cba333fca0437e42379b984f", "color": "#e377c2", "kind": "org" } } }, ... ] } 15.4.4. 
Directory synchronization To enable directory synchronization for the team newteam in organization testadminorg , where the corresponding group name in LDAP is ldapgroup : To disable synchronization for the same team: 15.4.5. Create a repository build via API To build a repository from the specified input and tag the build with custom tags, users can use the requestRepoBuild endpoint. It takes the following data: The archive_url parameter should point to a tar or zip archive that includes the Dockerfile and other required files for the build. The file_id parameter was part of the older build system and can no longer be used. If the Dockerfile is in a sub-directory, it needs to be specified as well. The archive should be publicly accessible. The OAuth application should have the "Administer Organization" scope, because only organization admins have access to the robots' account tokens. Otherwise, someone could get robot permissions by simply granting a build access to a robot (without having access themselves), and use it to grab the image contents. In case of errors, check the JSON block returned and ensure the archive location, pull robot, and other parameters are being passed correctly. Click "Download logs" on the top-right of the individual build's page to check the logs for more verbose messaging. 15.4.6. Create an org robot 15.4.7. Trigger a build Python with requests 15.4.8. Create a private repository 15.4.9. Create a mirrored repository Minimal configuration curl -X POST -H "Authorization: Bearer ${bearer_token}" -H "Content-Type: application/json" --data '{"external_reference": "quay.io/minio/mc", "external_registry_username": "", "sync_interval": 600, "sync_start_date": "2021-08-06T11:11:39Z", "root_rule": {"rule_kind": "tag_glob_csv", "rule_value": [ "latest" ]}, "robot_username": "orga+robot"}' https://${quay_registry}/api/v1/repository/${orga}/${repo}/mirror | jq Extended configuration $ curl -X POST -H "Authorization: Bearer ${bearer_token}" -H "Content-Type: application/json" --data '{"is_enabled": true, "external_reference": "quay.io/minio/mc", "external_registry_username": "username", "external_registry_password": "password", "external_registry_config": {"unsigned_images":true, "verify_tls": false, "proxy": {"http_proxy": "http://proxy.tld", "https_proxy": "https://proxy.tld", "no_proxy": "domain"}}, "sync_interval": 600, "sync_start_date": "2021-08-06T11:11:39Z", "root_rule": {"rule_kind": "tag_glob_csv", "rule_value": [ "*" ]}, "robot_username": "orga+robot"}' https://${quay_registry}/api/v1/repository/${orga}/${repo}/mirror | jq
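The pagination example in Section 15.4.3.1 issues each request by hand. The following is a hedged sketch of how the same walk could be scripted by following the next_page token until the endpoint stops returning one; the access token and registry hostname are placeholders, and the -k flag assumes a self-signed certificate:
TOKEN="<access_token>"
HOST="https://example-registry-quay-quay-enterprise.apps.example.com"
PAGE=""
while :; do
  # Request the next page of superuser logs, passing next_page only when one is set.
  RESP=$(curl -sk -H "Authorization: Bearer ${TOKEN}" "${HOST}/api/v1/superuser/logs${PAGE:+?next_page=${PAGE}}")
  # Print one field per log entry; adjust the jq filter as needed.
  echo "${RESP}" | jq -r '.logs[].kind'
  PAGE=$(echo "${RESP}" | jq -r '.next_page // empty')
  [ -z "${PAGE}" ] && break
done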
[ "https://docs.quay.io/api/swagger/", "https://<yourquayhost>/api/v1/discovery.", "export SERVER_HOSTNAME=<yourhostname> sudo podman run -p 8888:8080 -e API_URL=https://USDSERVER_HOSTNAME:8443/api/v1/discovery docker.io/swaggerapi/swagger-ui", "BROWSER_API_CALLS_XHR_ONLY: false", "curl -X GET -H \"Authorization: Bearer <token_here>\" \"https://<yourquayhost>/api/v1/superuser/users/\"", "curl -X GET -H \"Authorization: Bearer mFCdgS7SAIoMcnTsHCGx23vcNsTgziAa4CmmHIsg\" http://quay-server:8080/api/v1/superuser/users/ | jq { \"users\": [ { \"kind\": \"user\", \"name\": \"quayadmin\", \"username\": \"quayadmin\", \"email\": \"[email protected]\", \"verified\": true, \"avatar\": { \"name\": \"quayadmin\", \"hash\": \"357a20e8c56e69d6f9734d23ef9517e8\", \"color\": \"#5254a3\", \"kind\": \"user\" }, \"super_user\": true, \"enabled\": true } ] }", "curl -H \"Content-Type: application/json\" -H \"Authorization: Bearer Fava2kV9C92p1eXnMawBZx9vTqVnksvwNm0ckFKZ\" -X POST --data '{ \"username\": \"quaysuper\", \"email\": \"[email protected]\" }' http://quay-server:8080/api/v1/superuser/users/ | jq", "{ \"username\": \"quaysuper\", \"email\": \"[email protected]\", \"password\": \"EH67NB3Y6PTBED8H0HC6UVHGGGA3ODSE\", \"encrypted_password\": \"fn37AZAUQH0PTsU+vlO9lS0QxPW9A/boXL4ovZjIFtlUPrBz9i4j9UDOqMjuxQ/0HTfy38goKEpG8zYXVeQh3lOFzuOjSvKic2Vq7xdtQsU=\" }", "curl -X GET -H \"Authorization: Bearer mFCdgS7SAIoMcnTsHCGx23vcNsTgziAa4CmmHIsg\" http://quay-server:8080/api/v1/superuser/users/ | jq { \"users\": [ { \"kind\": \"user\", \"name\": \"quayadmin\", \"username\": \"quayadmin\", \"email\": \"[email protected]\", \"verified\": true, \"avatar\": { \"name\": \"quayadmin\", \"hash\": \"357a20e8c56e69d6f9734d23ef9517e8\", \"color\": \"#5254a3\", \"kind\": \"user\" }, \"super_user\": true, \"enabled\": true }, { \"kind\": \"user\", \"name\": \"quaysuper\", \"username\": \"quaysuper\", \"email\": \"[email protected]\", \"verified\": true, \"avatar\": { \"name\": \"quaysuper\", \"hash\": \"c0e0f155afcef68e58a42243b153df08\", \"color\": \"#969696\", \"kind\": \"user\" }, \"super_user\": true, \"enabled\": true } ] }", "curl -X GET -k -H \"Authorization: Bearer qz9NZ2Np1f55CSZ3RVOvxjeUdkzYuCp0pKggABCD\" https://example-registry-quay-quay-enterprise.apps.example.com/api/v1/superuser/logs | jq", "{ \"start_time\": \"Sun, 12 Dec 2021 11:41:55 -0000\", \"end_time\": \"Tue, 14 Dec 2021 11:41:55 -0000\", \"logs\": [ { \"kind\": \"create_repo\", \"metadata\": { \"repo\": \"t21\", \"namespace\": \"namespace1\" }, \"ip\": \"10.131.0.13\", \"datetime\": \"Mon, 13 Dec 2021 11:41:16 -0000\", \"performer\": { \"kind\": \"user\", \"name\": \"user1\", \"is_robot\": false, \"avatar\": { \"name\": \"user1\", \"hash\": \"5d40b245471708144de9760f2f18113d75aa2488ec82e12435b9de34a6565f73\", \"color\": \"#ad494a\", \"kind\": \"user\" } }, \"namespace\": { \"kind\": \"org\", \"name\": \"namespace1\", \"avatar\": { \"name\": \"namespace1\", \"hash\": \"6cf18b5c19217bfc6df0e7d788746ff7e8201a68cba333fca0437e42379b984f\", \"color\": \"#e377c2\", \"kind\": \"org\" } } }, { \"kind\": \"create_repo\", \"metadata\": { \"repo\": \"t20\", \"namespace\": \"namespace1\" }, \"ip\": \"10.131.0.13\", \"datetime\": \"Mon, 13 Dec 2021 11:41:05 -0000\", \"performer\": { \"kind\": \"user\", \"name\": \"user1\", \"is_robot\": false, \"avatar\": { \"name\": \"user1\", \"hash\": \"5d40b245471708144de9760f2f18113d75aa2488ec82e12435b9de34a6565f73\", \"color\": \"#ad494a\", \"kind\": \"user\" } }, \"namespace\": { \"kind\": \"org\", \"name\": \"namespace1\", 
\"avatar\": { \"name\": \"namespace1\", \"hash\": \"6cf18b5c19217bfc6df0e7d788746ff7e8201a68cba333fca0437e42379b984f\", \"color\": \"#e377c2\", \"kind\": \"org\" } } }, { \"kind\": \"create_repo\", \"metadata\": { \"repo\": \"t2\", \"namespace\": \"namespace1\" }, \"ip\": \"10.131.0.13\", \"datetime\": \"Mon, 13 Dec 2021 11:25:17 -0000\", \"performer\": { \"kind\": \"user\", \"name\": \"user1\", \"is_robot\": false, \"avatar\": { \"name\": \"user1\", \"hash\": \"5d40b245471708144de9760f2f18113d75aa2488ec82e12435b9de34a6565f73\", \"color\": \"#ad494a\", \"kind\": \"user\" } }, \"namespace\": { \"kind\": \"org\", \"name\": \"namespace1\", \"avatar\": { \"name\": \"namespace1\", \"hash\": \"6cf18b5c19217bfc6df0e7d788746ff7e8201a68cba333fca0437e42379b984f\", \"color\": \"#e377c2\", \"kind\": \"org\" } } } ], \"next_page\": \"gAAAAABhtzGDsH38x7pjWhD8MJq1_2FAgqUw2X9S2LoCLNPH65QJqB4XAU2qAxYb6QqtlcWj9eI6DUiMN_q3e3I0agCvB2VPQ8rY75WeaiUzM3rQlMc4i6ElR78t8oUxVfNp1RMPIRQYYZyXP9h6E8LZZhqTMs0S-SedaQJ3kVFtkxZqJwHVjgt23Ts2DonVoYwtKgI3bCC5\" }", "curl -X GET -k -H \"Authorization: Bearer qz9NZ2Np1f55CSZ3RVOvxjeUdkzYuCp0pKggABCD\" https://example-registry-quay-quay-enterprise.apps.example.com/api/v1/superuser/logs?next_page=gAAAAABhtzGDsH38x7pjWhD8MJq1_2FAgqUw2X9S2LoCLNPH65QJqB4XAU2qAxYb6QqtlcWj9eI6DUiMN_q3e3I0agCvB2VPQ8rY75WeaiUzM3rQlMc4i6ElR78t8oUxVfNp1RMPIRQYYZyXP9h6E8LZZhqTMs0S-SedaQJ3kVFtkxZqJwHVjgt23Ts2DonVoYwtKgI3bCC5 | jq", "{ \"start_time\": \"Sun, 12 Dec 2021 11:42:46 -0000\", \"end_time\": \"Tue, 14 Dec 2021 11:42:46 -0000\", \"logs\": [ { \"kind\": \"create_repo\", \"metadata\": { \"repo\": \"t1\", \"namespace\": \"namespace1\" }, \"ip\": \"10.131.0.13\", \"datetime\": \"Mon, 13 Dec 2021 11:25:07 -0000\", \"performer\": { \"kind\": \"user\", \"name\": \"user1\", \"is_robot\": false, \"avatar\": { \"name\": \"user1\", \"hash\": \"5d40b245471708144de9760f2f18113d75aa2488ec82e12435b9de34a6565f73\", \"color\": \"#ad494a\", \"kind\": \"user\" } }, \"namespace\": { \"kind\": \"org\", \"name\": \"namespace1\", \"avatar\": { \"name\": \"namespace1\", \"hash\": \"6cf18b5c19217bfc6df0e7d788746ff7e8201a68cba333fca0437e42379b984f\", \"color\": \"#e377c2\", \"kind\": \"org\" } } }, ] }", "curl -X POST -H \"Authorization: Bearer 9rJYBR3v3pXcj5XqIA2XX6Thkwk4gld4TCYLLWDF\" -H \"Content-type: application/json\" -d '{\"group_dn\": \"cn=ldapgroup,ou=Users\"}' http://quay1-server:8080/api/v1/organization/testadminorg/team/newteam/syncing", "curl -X DELETE -H \"Authorization: Bearer 9rJYBR3v3pXcj5XqIA2XX6Thkwk4gld4TCYLLWDF\" http://quay1-server:8080/api/v1/organization/testadminorg/team/newteam/syncing", "{ \"docker_tags\": [ \"string\" ], \"pull_robot\": \"string\", \"subdirectory\": \"string\", \"archive_url\": \"string\" }", "curl -X PUT https://quay.io/api/v1/organization/{orgname}/robots/{robot shortname} -H 'Authorization: Bearer <token>''", "curl -X POST https://quay.io/api/v1/repository/YOURORGNAME/YOURREPONAME/build/ -H 'Authorization: Bearer <token>'", "import requests r = requests.post('https://quay.io/api/v1/repository/example/example/image', headers={'content-type': 'application/json', 'Authorization': 'Bearer <redacted>'}, data={[<request-body-contents>}) print(r.text)", "curl -X POST https://quay.io/api/v1/repository -H 'Authorization: Bearer {token}' -H 'Content-Type: application/json' -d '{\"namespace\":\"yournamespace\", \"repository\":\"yourreponame\", \"description\":\"descriptionofyourrepo\", \"visibility\": \"private\"}' | jq", "curl -X POST -H \"Authorization: Bearer USD{bearer_token}\" -H 
\"Content-Type: application/json\" --data '{\"external_reference\": \"quay.io/minio/mc\", \"external_registry_username\": \"\", \"sync_interval\": 600, \"sync_start_date\": \"2021-08-06T11:11:39Z\", \"root_rule\": {\"rule_kind\": \"tag_glob_csv\", \"rule_value\": [ \"latest\" ]}, \"robot_username\": \"orga+robot\"}' https://USD{quay_registry}/api/v1/repository/USD{orga}/USD{repo}/mirror | jq", "curl -X POST -H \"Authorization: Bearer USD{bearer_token}\" -H \"Content-Type: application/json\" --data '{\"is_enabled\": true, \"external_reference\": \"quay.io/minio/mc\", \"external_registry_username\": \"username\", \"external_registry_password\": \"password\", \"external_registry_config\": {\"unsigned_images\":true, \"verify_tls\": false, \"proxy\": {\"http_proxy\": \"http://proxy.tld\", \"https_proxy\": \"https://proxy.tld\", \"no_proxy\": \"domain\"}}, \"sync_interval\": 600, \"sync_start_date\": \"2021-08-06T11:11:39Z\", \"root_rule\": {\"rule_kind\": \"tag_glob_csv\", \"rule_value\": [ \"*\" ]}, \"robot_username\": \"orga+robot\"}' https://USD{quay_registry}/api/v1/repository/USD{orga}/USD{repo}/mirror | jq" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/use_red_hat_quay/using_the_red_hat_quay_api
Chapter 7. Collecting OpenShift sandboxed containers data
Chapter 7. Collecting OpenShift sandboxed containers data When troubleshooting OpenShift sandboxed containers, you can open a support case and provide debugging information using the must-gather tool. If you are a cluster administrator, you can also review logs on your own, enabling a more detailed level of logs. 7.1. Collecting OpenShift sandboxed containers data for Red Hat Support When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. The must-gather tool enables you to collect diagnostic information about your OpenShift Container Platform cluster, including virtual machines and other data related to OpenShift sandboxed containers. For prompt support, supply diagnostic information for both OpenShift Container Platform and OpenShift sandboxed containers. 7.1.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including: Resource definitions Service logs By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local . Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections: To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section. For example: $ oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.11.0 To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section. For example: $ oc adm must-gather -- /usr/bin/gather_audit_logs Note Audit logs are not collected as part of the default set of information to reduce the size of the files. When you run oc adm must-gather , a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in the current working directory. For example: NAMESPACE NAME READY STATUS RESTARTS AGE ... openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s ... To collect OpenShift sandboxed containers data with must-gather , you must specify the OpenShift sandboxed containers image: --image=registry.redhat.io/openshift-sandboxed-containers/osc-must-gather-rhel8:1.3.0 7.2. About OpenShift sandboxed containers log data When you collect log data about your cluster, the following features and objects are associated with OpenShift sandboxed containers: All namespaces and their child objects that belong to any OpenShift sandboxed containers resources All OpenShift sandboxed containers custom resource definitions (CRDs) The following OpenShift sandboxed containers component logs are collected for each pod running with the kata runtime: Kata agent logs Kata runtime logs QEMU logs Audit logs CRI-O logs 7.3. Enabling debug logs for OpenShift sandboxed containers As a cluster administrator, you can collect a more detailed level of logs for OpenShift sandboxed containers. You can also enhance logging by changing the logLevel field in the KataConfig CR. This changes the log_level in the CRI-O runtime for the worker nodes running OpenShift sandboxed containers. 
Procedure Change the logLevel field in your existing KataConfig CR to debug : $ oc patch kataconfig <name_of_kataconfig_file> --type merge --patch '{"spec":{"logLevel":"debug"}}' Note When running this command, reference the name of your KataConfig CR. This is the name you used to create the CR when setting up OpenShift sandboxed containers. Verification Monitor the kata-oc machine config pool until the UPDATED field appears as True , meaning all worker nodes are updated: $ oc get mcp kata-oc Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE kata-oc rendered-kata-oc-169 False True False 3 1 1 0 9h Verify that the log_level was updated in CRI-O: Open an oc debug session to a node in the machine config pool and run chroot /host . $ oc debug node/<node_name> sh-4.4# chroot /host Verify the changes in the crio.conf file: sh-4.4# crio config | egrep 'log_level' Example output log_level = "debug" 7.3.1. Viewing debug logs for OpenShift sandboxed containers Cluster administrators can use the enhanced debug logs for OpenShift sandboxed containers to troubleshoot issues. The logs for each node are printed to the node journal. You can review the logs for the following OpenShift sandboxed containers components: Kata agent Kata runtime ( containerd-shim-kata-v2 ) virtiofsd Logs for QEMU do not print to the node journal. However, a QEMU failure is reported to the runtime, and the console of the QEMU guest is printed to the node journal. You can view these logs together with the Kata agent logs. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. Procedure To review the Kata agent logs and guest console logs, run: $ oc debug node/<nodename> -- journalctl -D /host/var/log/journal -t kata -g "reading guest console" To review the kata runtime logs, run: $ oc debug node/<nodename> -- journalctl -D /host/var/log/journal -t kata To review the virtiofsd logs, run: $ oc debug node/<nodename> -- journalctl -D /host/var/log/journal -t virtiofsd 7.4. Additional resources For more information about gathering data for support, see Gathering data about your cluster .
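Putting the steps above together, the following sketch shows a typical collection and verification pass; the KataConfig name and node names are placeholders, and the jsonpath query assumes the logLevel field set by the patch in this procedure:
# Collect OpenShift sandboxed containers data with the image noted in section 7.1.1:
oc adm must-gather --image=registry.redhat.io/openshift-sandboxed-containers/osc-must-gather-rhel8:1.3.0
# Confirm the log level requested in the KataConfig CR:
oc get kataconfig <name_of_kataconfig_file> -o jsonpath='{.spec.logLevel}{"\n"}'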
[ "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.11.0", "oc adm must-gather -- /usr/bin/gather_audit_logs", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s", "--image=registry.redhat.io/openshift-sandboxed-containers/osc-must-gather-rhel8:1.3.0", "oc patch kataconfig <name_of_kataconfig_file> --type merge --patch '{\"spec\":{\"logLevel\":\"debug\"}}'", "oc get mcp kata-oc", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE kata-oc rendered-kata-oc-169 False True False 3 1 1 0 9h", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# crio config | egrep 'log_level'", "log_level = \"debug\"", "oc debug node/<nodename> -- journalctl -D /host/var/log/journal -t kata -g \"reading guest console\"", "oc debug node/<nodename> -- journalctl -D /host/var/log/journal -t kata", "oc debug node/<nodename> -- journalctl -D /host/var/log/journal -t virtiofsd" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/sandboxed_containers_support_for_openshift/troubleshooting-sandboxed-containers
Chapter 5. Converting physical machines to virtual machines
Chapter 5. Converting physical machines to virtual machines Warning The Red Hat Enterprise Linux 6 version of the virt-v2v utility has been deprecated. Users of Red Hat Enterprise Linux 6 are advised to create a Red Hat Enterprise Linux 7 virtual machine, and install virt-v2v in that virtual machine. The Red Hat Enterprise Linux 7 version is fully supported and documented in virt-v2v Knowledgebase articles . Read this chapter for information about converting physical machines to virtual machines with the Red Hat Physical-to-Virtual (P2V) solution, Virt P2V. Virt P2V comprises both virt-p2v-server , included in the virt-v2v package, and the P2V client, available from the Red Hat Customer Portal as rhel-6.x-p2v.iso . rhel-6.x-p2v.iso is a bootable disk image based on a customized Red Hat Enterprise Linux 6 image. Booting a machine from rhel-6.x-p2v.iso and connecting to a V2V conversion server that has virt-v2v installed allows data from the physical machine to be uploaded to the conversion server and converted for use with either Red Hat Enterprise Virtualization, or KVM managed by libvirt . Note that the host must be running Red Hat Enterprise Linux 6. Other host configurations will not work. Important Adhere to the following rules. Failure to do so may cause the loss of data and disk malfunction: The Physical to Virtual (P2V) feature requires a Red Hat Enterprise Linux 6 virtualization host with virt-v2v version 0.8.7 or later. You can check your version of virt-v2v by running $ rpm -q virt-v2v . Note that you cannot convert to a Red Hat Enterprise Linux 5 conversion server, or with a virt-v2v package prior to version 0.8.7-6.el6. A number of operating systems can be converted from physical machines to virtual machines, but be aware that there are known issues converting physical machines using software RAID. Red Hat Enterprise Linux 6 machines with a filesystem root on a software RAID md device may be converted to guest virtual machines. Red Hat Enterprise Linux 4 and Red Hat Enterprise Linux 5 physical machines with their filesystem root on a software RAID md device cannot be converted to virtual machines. There is currently no workaround available. 5.1. Prerequisites For a physical machine to be converted using the P2V client, it must meet basic hardware requirements in order to successfully boot the P2V client: Must be bootable from PXE, Optical Media (CD, DVD), or USB. At least 512 MB of RAM. An Ethernet connection. Console access (keyboard, video, mouse). An operating system supported by virt-v2v : Red Hat Enterprise Linux 3.9 Red Hat Enterprise Linux 4 Red Hat Enterprise Linux 5 Red Hat Enterprise Linux 6 Windows XP Windows Vista Windows 7 Windows Server 2003 Windows Server 2008
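As a quick check of the version requirement above, run the following on the intended conversion server; this is a sketch only and assumes the virt-v2v package is already installed:
# Verify that the installed virt-v2v meets the minimum version (0.8.7-6.el6 or later):
rpm -q virt-v2v
rpm -q --qf '%{VERSION}-%{RELEASE}\n' virt-v2v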
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/chap-v2v_guide-p2v_migration_converting_physical_machines_to_virtual_machines
Chapter 5. Mirroring Ceph block devices
Chapter 5. Mirroring Ceph block devices As a storage administrator, you can add another layer of redundancy to Ceph block devices by mirroring data images between Red Hat Ceph Storage clusters. Understanding and using Ceph block device mirroring can provide you with protection against data loss, such as from a site failure. There are two configurations for mirroring Ceph block devices, one-way mirroring or two-way mirroring, and you can configure mirroring on pools and individual images. 5.1. Prerequisites A minimum of two healthy running Red Hat Ceph Storage clusters. Network connectivity between the two storage clusters. Access to a Ceph client node for each Red Hat Ceph Storage cluster. 5.2. Ceph block device mirroring RADOS Block Device (RBD) mirroring is a process of asynchronous replication of Ceph block device images between two or more Ceph storage clusters. By locating Ceph storage clusters in different geographic locations, RBD mirroring can help you recover from a site disaster. Journal-based Ceph block device mirroring ensures point-in-time consistent replicas of all changes to an image, including reads and writes, block device resizing, snapshots, clones and flattening. RBD mirroring uses exclusive locks and the journaling feature to record all modifications to an image in the order in which they occur. This ensures that a crash-consistent mirror of an image is available. Important The CRUSH hierarchies supporting primary and secondary pools that mirror block device images must have the same capacity and performance characteristics, and must have adequate bandwidth to ensure mirroring without excess latency. For example, if you have X MB/s average write throughput to images in the primary storage cluster, the network must support N * X throughput in the network connection to the secondary site plus a safety factor of Y% to mirror N images. The rbd-mirror daemon is responsible for synchronizing images from one Ceph storage cluster to another Ceph storage cluster by pulling changes from the remote primary image and writing those changes to the local, non-primary image. The rbd-mirror daemon can run either on a single Ceph storage cluster for one-way mirroring or on two Ceph storage clusters for two-way mirroring that participate in the mirroring relationship. For RBD mirroring to work, using either one-way or two-way replication, a couple of assumptions are made: A pool with the same name exists on both storage clusters. A pool contains journal-enabled images you want to mirror. Important In one-way or two-way replication, each instance of rbd-mirror must be able to connect to the other Ceph storage cluster simultaneously. Additionally, the network must have sufficient bandwidth between the two data center sites to handle mirroring. One-way Replication One-way mirroring implies that a primary image or pool of images in one storage cluster gets replicated to a secondary storage cluster. One-way mirroring also supports replicating to multiple secondary storage clusters. On the secondary storage cluster, the image is the non-primary replica; that is, Ceph clients cannot write to the image. When data is mirrored from a primary storage cluster to a secondary storage cluster, the rbd-mirror daemon runs ONLY on the secondary storage cluster. For one-way mirroring to work, a couple of assumptions are made: You have two Ceph storage clusters and you want to replicate images from a primary storage cluster to a secondary storage cluster. 
The secondary storage cluster has a Ceph client node attached to it running the rbd-mirror daemon. The rbd-mirror daemon will connect to the primary storage cluster to sync images to the secondary storage cluster. Two-way Replication Two-way replication adds an rbd-mirror daemon on the primary cluster so images can be demoted on it and promoted on the secondary cluster. Changes can then be made to the images on the secondary cluster and they will be replicated in the reverse direction, from secondary to primary. Both clusters must have rbd-mirror running to allow promoting and demoting images on either cluster. Currently, two-way replication is only supported between two sites. For two-way mirroring to work, a couple of assumptions are made: You have two storage clusters and you want to be able to replicate images between them in either direction. Both storage clusters have a client node attached to them running the rbd-mirror daemon. The rbd-mirror daemon running on the secondary storage cluster will connect to the primary storage cluster to synchronize images to secondary, and the rbd-mirror daemon running on the primary storage cluster will connect to the secondary storage cluster to synchronize images to primary. Note As of Red Hat Ceph Storage 4, running multiple active rbd-mirror daemons in a single cluster is supported. Mirroring Modes Mirroring is configured on a per-pool basis with mirror peering storage clusters. Ceph supports two mirroring modes, depending on the type of images in the pool. Pool Mode All images in a pool with the journaling feature enabled are mirrored. Image Mode Only a specific subset of images within a pool are mirrored. You must enable mirroring for each image separately. Image States Whether or not an image can be modified depends on its state: Images in the primary state can be modified. Images in the non-primary state cannot be modified. Images are automatically promoted to primary when mirroring is first enabled on an image. The promotion can happen: Implicitly by enabling mirroring in pool mode. Explicitly by enabling mirroring of a specific image. It is possible to demote primary images and promote non-primary images. Additional Resources See the Image promotion and demotion section of the Red Hat Ceph Storage Block Device Guide for more details. 5.3. Configuring one-way mirroring using Ansible This procedure uses ceph-ansible to configure one-way replication of images on a primary storage cluster known as site-a , to a secondary storage cluster known as site-b . In the following examples, data is the name of the pool that contains the images to be mirrored. Prerequisites Two running Red Hat Ceph Storage clusters. A Ceph client node. A pool with the same name exists on both clusters. Images within the pool must have exclusive-lock and journaling enabled for journal-based mirroring. Note When using one-way replication, you can mirror to multiple secondary storage clusters. Procedure On the cluster where the images originate, enable the exclusive-lock and journaling features on an image. For new images , use the --image-feature option: Syntax Example For existing images , use the rbd feature enable command: Syntax Example To enable exclusive-lock and journaling on all new images by default, add the following setting to the Ceph configuration file: In the site-a cluster, complete the following steps: On a monitor node, create the user that the rbd-mirror daemon will use to connect to the cluster. 
The example creates a site-a user and outputs the key to a file named site-a.client.site-a.keyring : Syntax Example Copy the Ceph configuration file and the newly created key file from the monitor node to the site-b monitor and client nodes. Rename the Ceph configuration file from ceph.conf to CLUSTER-NAME .conf. In these examples, the file is /etc/ceph/site-a.conf . In the site-b cluster, complete the following steps: On the Ansible administration node, add an [rbdmirrors] group in the Ansible inventory file. The usual inventory file is /etc/ansible/hosts . Under the [rbdmirrors] group, add the name of the site-b client node on which the rbd-mirror daemon will run. The daemon will pull image changes from site-a to site-b . Navigate to the /usr/share/ceph-ansible/ directory: Create a new rbdmirrors.yml file by copying group_vars/rbdmirrors.yml.sample to group_vars/rbdmirrors.yml : Open the group_vars/rbdmirrors.yml file for editing. Set ceph_rbd_mirror_configure to true . Set ceph_rbd_mirror_pool to the pool in which you want to mirror images. In these examples, data is the name of the pool. By default, ceph-ansible configures mirroring using pool mode, which mirrors all images in a pool. Enable image mode where only images that have mirroring explicitly enabled are mirrored. To enable image mode, set ceph_rbd_mirror_mode to image : Set a name for the cluster that rbd-mirror will pull from. In these examples, the other cluster is site-a . On the Ansible administration node, set the user name of the key using ceph_rbd_mirror_remote_user . Use the same name you used when you created the key. In these examples the user is named client.site-a . As the ceph-ansible user, run the Ansible playbook: Bare-metal deployments: Container deployments: Explicitly enable mirroring on the desired images in both site-a and site-b clusters: Syntax Journal-based mirroring: Snapshot-based mirroring: Example Note Repeat this step whenever you want to mirror new image to peer cluster. Verify the mirroring status. Run the following command from a Ceph Monitor node in the site-b cluster: Example Journal-based mirroring: Snapshot-based mirroring: 1 1 If images are in the state up+replaying , then mirroring is functioning properly. Note Based on the connection between the sites, mirroring can take a long time to sync the images. 5.4. Configuring two-way mirroring using Ansible This procedure uses ceph-ansible to configure two-way replication so images can be mirrored in either direction between two clusters known as site-a and site-b . In the following examples, data is the name of the pool that contains the images to be mirrored. Note Two-way mirroring does not allow simultaneous writes to be made to the same image on either cluster. Images are promoted on one cluster and demoted on another. Depending on their status, they will mirror in one direction or the other. Prerequisites Two running Red Hat Ceph Storage clusters. Each cluster has a client node. A pool with the same name exists on both clusters. Images within the pool must have exclusive-lock and journaling enabled for journal-based mirroring. Procedure On the cluster where the images originate, enable the exclusive-lock and journaling features on an image. 
For new images , use the --image-feature option: Syntax Example For existing images , use the rbd feature enable command: Syntax Example To enable exclusive-lock and journaling on all new images by default, add the following setting to the Ceph configuration file: In the site-a cluster, complete the following steps: On a monitor node, create the user the rbd-mirror daemon will use to connect to the cluster. The example creates a site-a user and outputs the key to a file named site-a.client.site-a.keyring , and the Ceph configuration file is /etc/ceph/site-a.conf . Syntax Example Copy the keyring to the site-b cluster. Copy the file to the client node in the site-b cluster that the rbd-daemon will run on. Save the file to /etc/ceph/site-a.client.site-a.keyring : Syntax Example Copy the Ceph configuration file from the monitor node to the site-b monitor node and client nodes. The Ceph configuration file in this example is /etc/ceph/site-a.conf . Syntax Example In the site-b cluster, complete the following steps: Configure mirroring from site-a to site-b . On the Ansible administration node, add an [rbdmirrors] group in the Ansible inventory file, usually /usr/share/ceph-ansible/hosts . Under the [rbdmirrors] group, add the name of a site-b client node that the rbd-mirror daemon will run on. This daemon pulls image changes from site-a to site-b . Example Navigate to the /usr/share/ceph-ansible/ directory: Create a new rbdmirrors.yml file by copying group_vars/rbdmirrors.yml.sample to group_vars/rbdmirrors.yml : Open for editing the group_vars/rbdmirrors.yml file. Set ceph_rbd_mirror_configure to true , and set ceph_rbd_mirror_pool to the pool you want to mirror images in. In these examples, data is the name of the pool. By default, ceph-ansible configures mirroring using pool mode, which mirrors all images in a pool. Enable image mode where only images that have mirroring explicitly enabled are mirrored. To enable image mode, set ceph_rbd_mirror_mode to image : Set a name for the cluster that rbd-mirror in the group_vars/rbdmirrors.yml file. In these examples, the other cluster is site-a . On the Ansible administration node, set the user name of the key using ceph_rbd_mirror_remote_user in the group_vars/rbdmirrors.yml file. Use the same name you used when you created the key. In these examples the user is named client.site-a . As the ansible user, run the Ansible playbook: Bare-metal deployments: Container deployments: Verify the mirroring status. Run the following command from a Ceph Monitor node on the site-b cluster: Example Journal-based mirroring: Snapshot-based mirroring: 1 1 If images are in the state up+replaying , then mirroring is functioning properly. Note Based on the connection between the sites, mirroring can take a long time to sync the images. In the site-b cluster, complete the following steps. The steps are largely the same as above: On a monitor node, create the user the rbd-mirror daemon will use to connect to the cluster. The example creates a site-b user and outputs the key to a file named site-b.client.site-b.keyring , and the Ceph configuration file is /etc/ceph/site-b.conf . Syntax Example Copy the keyring to the site-a cluster. Copy the file to the client node in the site-a cluster that the rbd-daemon will run on. Save the file to /etc/ceph/site-b.client.site-b.keyring : Syntax Example Copy the Ceph configuration file from the monitor node to the site-a monitor node and client nodes. The Ceph configuration file in this example is /etc/ceph/site-b.conf . 
Syntax Example In the site-a cluster, complete the following steps: Configure mirroring from site-b to site-a . On the Ansible administration node, add an [rbdmirrors] group in the Ansible inventory file, usually /usr/share/ceph-ansible/hosts . Under the [rbdmirrors] group, add the name of a site-a client node that the rbd-mirror daemon will run on. This daemon pulls image changes from site-b to site-a . Example Navigate to the /usr/share/ceph-ansible/ directory: Create a new rbdmirrors.yml file by copying group_vars/rbdmirrors.yml.sample to group_vars/rbdmirrors.yml : Open for editing the group_vars/rbdmirrors.yml file. Set ceph_rbd_mirror_configure to true , and set ceph_rbd_mirror_pool to the pool you want to mirror images in. In these examples, data is the name of the pool. By default, ceph-ansible configures mirroring using pool mode which mirrors all images in a pool. Enable image mode where only images that have mirroring explicitly enabled are mirrored. To enable image mode, set ceph_rbd_mirror_mode to image : On the Ansible administration node, set a name for the cluster that rbd-mirror in the group_vars/rbdmirrors.yml file. Following the examples, the other cluster is named site-b . On the Ansible administration node, set the user name of the key using ceph_rbd_mirror_remote_user in group_vars/rbdmirrors.yml file. In these examples the user is named client.site-b . As the Ansible user on the administration node, run the Ansible playbook: Bare-metal deployments: Container deployments: Explicitly enable mirroring on the desired images in both site-a and site-b clusters: Syntax Journal-based mirroring: Snapshot-based mirroring: Example Note Repeat this step whenever you want to mirror new image to peer cluster. Verify the mirroring status. Run the following command from the client node on the site-a cluster: Example Journal-based mirroring: Snapshot-based mirroring: 1 1 The images should be in state up+stopped . Here, up means the rbd-mirror daemon is running and stopped means the image is not a target for replication from another cluster. This is because the images are primary on this cluster. 5.5. Configuring one-way mirroring using the command-line interface This procedure configures one-way replication of a pool from the primary storage cluster to a secondary storage cluster. Note When using one-way replication you can mirror to multiple secondary storage clusters. Note Examples in this section will distinguish between two storage clusters by referring to the primary storage cluster with the primary images as site-a , and the secondary storage cluster you are replicating the images to, as site-b . The pool name used in these examples is called data . Prerequisites A minimum of two healthy and running Red Hat Ceph Storage clusters. Root-level access to a Ceph client node for each storage cluster. A CephX user with administrator-level capabilities. Images within the pool must have exclusive-lock and journaling enabled for journal-based mirroring. Procedure Install the rbd-mirror package on the client node connected to the site-b storage cluster: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Note The package is provided by the Red Hat Ceph Storage Tools repository. Enable the exclusive-lock, and journaling features on an image. 
For new images , use the --image-feature option: Syntax Example For existing images , use the rbd feature enable command: Syntax Example To enable exclusive-lock and journaling on all new images by default, add the following setting to the Ceph configuration file: Choose the mirroring mode, either pool or image mode. Important Use image mode for snapshot-based mirroring. Enabling pool mode : Syntax Example This example enables mirroring of the whole pool named data . Enabling image mode : Syntax Example This example enables image mode mirroring on the pool named data . Verify that mirroring has been successfully enabled: Syntax Example In the site-a cluster, complete the following steps: On the Ceph client node, create a user: Syntax Example Copy keyring to site-b cluster: Syntax Example On a Ceph client node, bootstrap the storage cluster peers. Register the storage cluster peer to the pool: Syntax Example Note This example bootstrap command creates the client.rbd-mirror-peer Ceph user. Copy the bootstrap token file to the site-b storage cluster. Syntax Example In the site-b cluster, complete the following steps: On the client node, create a user: Syntax Example Copy keyring to the site-a cluster, the Ceph client node: Syntax Example Import the bootstrap token: Syntax Example Note For one-way RBD mirroring, you must use the --direction rx-only argument, as two-way mirroring is the default when bootstrapping peers. Enable and start the rbd-mirror daemon on client node: Syntax Replace CLIENT_ID with the Ceph user created earlier. Example Important Each rbd-mirror daemon must have a unique Client ID. To verify the mirroring status, run the following command from a Ceph Monitor node in the site-a and site-b clusters: Syntax Example Journal-based mirroring: Snapshot-based mirroring: 1 1 Here, up means the rbd-mirror daemon is running, and stopped means this image is not the target for replication from another storage cluster. This is because the image is primary on this storage cluster. Example Journal-based mirroring: Snapshot-based mirroring: 1 1 If images are in the state up+replaying , then mirroring is functioning properly. Here, up means the rbd-mirror daemon is running, and replaying means this image is the target for replication from another storage cluster. Note Depending on the connection between the sites, mirroring can take a long time to sync the images. Additional Resources See the Ceph block device mirroring section in the Red Hat Ceph Storage Block Device Guide for more details. See the User Management section in the Red Hat Ceph Storage Administration Guide for more details on Ceph users. 5.6. Configuring two-way mirroring using the command-line interface This procedure configures two-way replication of a pool between the primary storage cluster, and a secondary storage cluster. Note When using two-way replication you can only mirror between two storage clusters. Note Examples in this section will distinguish between two storage clusters by referring to the primary storage cluster with the primary images as site-a , and the secondary storage cluster you are replicating the images to, as site-b . The pool name used in these examples is called data . Prerequisites A minimum of two healthy and running Red Hat Ceph Storage clusters. Root-level access to a Ceph client node for each storage cluster. A CephX user with administrator-level capabilities. Images within the pool must have exclusive-lock and journaling enabled for journal-based mirroring. 
Procedure Install the rbd-mirror package on the client node connected to the site-a storage cluster, and the client node connected to the site-b storage cluster: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Note The package is provided by the Red Hat Ceph Storage Tools repository. Enable the exclusive-lock, and journaling features on an image. For new images , use the --image-feature option: Syntax Example For existing images , use the rbd feature enable command: Syntax Example To enable exclusive-lock and journaling on all new images by default, add the following setting to the Ceph configuration file: Choose the mirroring mode, either pool or image mode. Important Use image mode for snapshot-based mirroring. Enabling pool mode : Syntax Example This example enables mirroring of the whole pool named data . Enabling image mode : Syntax Example This example enables image mode mirroring on the pool named data . Verify that mirroring has been successfully enabled: Syntax Example In the site-a cluster, complete the following steps: On the Ceph client node, create a user: Syntax Example Copy keyring to site-b cluster: Syntax Example On a Ceph client node, bootstrap the storage cluster peers. Register the storage cluster peer to the pool: Syntax Example Note This example bootstrap command creates the client.rbd-mirror-peer Ceph user. Copy the bootstrap token file to the site-b storage cluster. Syntax Example In the site-b cluster, complete the following steps: On the client node, create a user: Syntax Example Copy keyring to the site-a cluster, the Ceph client node: Syntax Example Import the bootstrap token: Syntax Example Note The --direction argument is optional, as two-way mirroring is the default when bootstrapping peers. Enable and start the rbd-mirror daemon on the primary and secondary client nodes: Syntax Replace CLIENT_ID with the Ceph user created earlier. Example In the above example, users are enabled in the primary cluster site-a Example In the above example, users are enabled in the secondary cluster site-b Important Each rbd-mirror daemon must have a unique Client ID. To verify the mirroring status, run the following command from a Ceph Monitor node in the site-a and site-b clusters: Syntax Example Journal-based mirroring: Snapshot-based mirroring: 1 1 Here, up means the rbd-mirror daemon is running, and stopped means this image is not the target for replication from another storage cluster. This is because the image is primary on this storage cluster. Example Journal-based mirroring: Snapshot-based mirroring: 1 1 If images are in the state up+replaying , then mirroring is functioning properly. Here, up means the rbd-mirror daemon is running, and replaying means this image is the target for replication from another storage cluster. Note Depending on the connection between the sites, mirroring can take a long time to sync the images. Additional Resources See the Ceph block device mirroring section in the Red Hat Ceph Storage Block Device Guide for more details. See the User Management section in the Red Hat Ceph Storage Administration Guide for more details on Ceph users. 5.7. Administration for mirroring Ceph block devices As a storage administrator, you can do various tasks to help you manage the Ceph block device mirroring environment. You can do the following tasks: Viewing information about storage cluster peers. Add or remove a storage cluster peer. Getting mirroring status for a pool or image. Enabling mirroring on a pool or image. 
Disabling mirroring on a pool or image. Delaying block device replication. Promoting and demoting an image. 5.7.1. Prerequisites A minimum of two healthy running Red Hat Ceph Storage cluster. Root-level access to the Ceph client nodes. A one-way or two-way Ceph block device mirroring relationship. 5.7.2. Viewing information about peers View information about storage cluster peers. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To view information about the peers: Syntax Example 5.7.3. Enabling mirroring on a pool Enable mirroring on a pool by running the following commands on both peer clusters. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To enable mirroring on a pool: Syntax Example This example enables mirroring of the whole pool named data . Example This example enables image mode mirroring on the pool named data . Additional Resources See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details. 5.7.4. Disabling mirroring on a pool Before disabling mirroring, remove the peer clusters. Note When you disable mirroring on a pool, you also disable it on any images within the pool for which mirroring was enabled separately in image mode. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To disable mirroring on a pool: Syntax Example This example disables mirroring of a pool named data . Additional Resources See the Configuring image one-way mirroring section in the Red Hat Ceph Storage Block Device Guide for details. See the Removing a storage cluster peer section in the Red Hat Ceph Storage Block Device Guide for details. 5.7.5. Enabling image mirroring Enable mirroring on the whole pool in image mode on both peer storage clusters. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Enable mirroring for a specific image within the pool: Syntax Example This example enables mirroring for the image2 image in the data pool. Additional Resources See the Enabling mirroring on a pool section in the Red Hat Ceph Storage Block Device Guide for details. 5.7.6. Disabling image mirroring Disable the mirror for images. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To disable mirroring for a specific image: Syntax Example This example disables mirroring of the image2 image in the data pool. 5.7.7. Image promotion and demotion Promote or demote an image. Note Do not force promote non-primary images that are still syncing, because the images will not be valid after the promotion. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To demote an image to non-primary: Syntax Example This example demotes the image2 image in the data pool. To promote an image to primary: Syntax Example This example promotes image2 in the data pool. Depending on which type of mirroring you are using, see either Recovering from a disaster with one-way mirroring or Recovering from a disaster with two-way mirroring for details. Use the --force option to force promote a non-primary image: Syntax Example Use forced promotion when the demotion cannot be propagated to the peer Ceph storage cluster. For example, because of cluster failure or communication outage. Additional Resources See the Failover after a non-orderly shutdown section in the Red Hat Ceph Storage Block Device Guide for details. 5.7.8. 
Image resynchronization Re-synchronize an image. In case of an inconsistent state between the two peer clusters, the rbd-mirror daemon does not attempt to mirror the image that is causing the inconsistency. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To request a resynchronization to the primary image: Syntax Example This example requests resynchronization of image2 in the data pool. Additional Resources To recover from an inconsistent state because of a disaster, see either Recovering from a disaster with one-way mirroring or Recovering from a disaster with two-way mirroring for details. 5.7.9. Adding a storage cluster peer Add a storage cluster peer for the rbd-mirror daemon to discover its peer storage cluster. For example, to add the site-a storage cluster as a peer to the site-b storage cluster, then follow this procedure from the client node in the site-b storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Register the peer to the pool: Syntax Example 5.7.10. Removing a storage cluster peer Remove a storage cluster peer by specifying the peer UUID. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Specify the pool name and the peer Universally Unique Identifier (UUID). Syntax Example Tip To view the peer UUID, use the rbd mirror pool info command. 5.7.11. Getting mirroring status for a pool Get the mirror status for a pool. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To get the mirroring pool summary: Syntax Example Tip To output status details for every mirroring image in a pool, use the --verbose option. 5.7.12. Getting mirroring status for a single image Get the mirror status for an image. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To get the status of a mirrored image: Syntax Example This example gets the status of the image2 image in the data pool. 5.7.13. Delaying block device replication Whether you are using one- or two-way replication, you can delay replication between RADOS Block Device (RBD) mirroring images. You might want to implement delayed replication if you want a window of cushion time in case an unwanted change to the primary image needs to be reverted before being replicated to the secondary image. To implement delayed replication, the rbd-mirror daemon within the destination storage cluster should set the rbd_mirroring_replay_delay = MINIMUM_DELAY_IN_SECONDS configuration option. This setting can either be applied globally within the ceph.conf file utilized by the rbd-mirror daemons, or on an individual image basis. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To utilize delayed replication for a specific image, on the primary image, run the following rbd CLI command: Syntax Example This example sets a 10 minute minimum replication delay on image vm-1 in the vms pool. 5.7.14. Asynchronous updates and Ceph block device mirroring When updating a storage cluster using Ceph block device mirroring with an asynchronous update, follow the update instruction in the Red Hat Ceph Storage Installation Guide . Once updating is done, restart the Ceph block device instances. Note There is no required order for restarting the instances. Red Hat recommends restarting the instance pointing to the pool with primary images followed by the instance pointing to the mirrored pool. 
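As a minimal sketch of the recommended restart order, assuming the rbd-mirror daemon instances were created as rbd-mirror.site-a and rbd-mirror.site-b as in the examples in this chapter:

# On the cluster holding the primary images (for example, site-a)
systemctl restart ceph-rbd-mirror@rbd-mirror.site-a

# Then on the cluster holding the mirrored images (for example, site-b)
systemctl restart ceph-rbd-mirror@rbd-mirror.site-b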
5.7.15. Creating an image mirror-snapshot Create an image mirror-snapshot when it is required to mirror the changed contents of an RBD image when using snapshot-based mirroring. Prerequisites A minimum of two healthy running Red Hat Ceph Storage clusters. Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters. A CephX user with administrator-level capabilities. Access to the Red Hat Ceph Storage cluster where a snapshot mirror will be created. Important By default only 3 image mirror-snapshots can be created per image. The most recent image mirror-snapshot is automatically removed if the limit is reached. If required, the limit can be overridden through the rbd_mirroring_max_mirroring_snapshots configuration. Image mirror-snapshots are automatically deleted when the image is removed or when mirroring is disabled. Procedure To create an image-mirror snapshot: Syntax Example Additional Resources See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details. 5.7.16. Scheduling mirror-snapshots Mirror-snapshots can be automatically created when mirror-snapshot schedules are defined. The mirror-snapshot can be scheduled globally, per-pool or per-image levels. Multiple mirror-snapshot schedules can be defined at any level but only the most specific snapshot schedules that match an individual mirrored image will run. Additional Resources See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details. 5.7.17. Creating a mirror-snapshot schedule Create a mirror-snapshot schedule. Prerequisites A minimum of two healthy running Red Hat Ceph Storage clusters. Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters. A CephX user with administrator-level capabilities. Access to the Red Hat Ceph Storage cluster where a snapshot mirror will be created. Procedure To create a mirror-snapshot schedule: Syntax The interval can be specified in days, hours, or minutes using d, h, or m suffix respectively. The optional START_TIME can be specified using the ISO 8601 time format. Example Scheduling at image level: Scheduling at pool level: Scheduling at global level: Additional Resources See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details. 5.7.18. Listing all snapshot schedules at a specific level List all snapshot schedules at a specific level. Prerequisites A minimum of two healthy running Red Hat Ceph Storage clusters. Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters. A CephX user with administrator-level capabilities. Access to the Red Hat Ceph Storage cluster where a snapshot mirror will be created. Procedure To list all snapshot schedules for a specific global, pool or image level, with an optional pool or image name: Syntax Additionally, the `--recursive option can be specified to list all schedules at the specified level as shown below: Example Additional Resources See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details. 5.7.19. Removing a mirror-snapshot schedule Remove a mirror-snapshot schedule. Prerequisites A minimum of two healthy running Red Hat Ceph Storage clusters. Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters. A CephX user with administrator-level capabilities. Access to the Red Hat Ceph Storage cluster where a snapshot mirror will be created. 
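The schedule commands used in these sections share a common syntax; as a minimal sketch with the example pool data and image image1 from this chapter (the 6h interval is illustrative), a schedule can be added, listed, and removed as follows:

# Add a 6-hour mirror-snapshot schedule for a single image
rbd mirror snapshot schedule add --pool data --image image1 6h

# List schedules at all levels
rbd mirror snapshot schedule ls --pool data --recursive

# Remove the schedule again
rbd mirror snapshot schedule remove data/image1 6h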
Procedure To remove a mirror-snapshot schedule: Syntax The interval can be specified in days, hours, or minutes using the d, h, or m suffix, respectively. The optional START_TIME can be specified using the ISO 8601 time format. Example Example Additional Resources See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details. 5.7.20. Viewing the status for the snapshots to be created View the status for the snapshots to be created for snapshot-based mirroring RBD images. Prerequisites A minimum of two healthy running Red Hat Ceph Storage clusters. Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters. A CephX user with administrator-level capabilities. Access to the Red Hat Ceph Storage cluster where a snapshot mirror will be created. Procedure To view the status for the snapshots to be created: Syntax Example Additional Resources See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details. 5.8. Recover from a disaster As a storage administrator, you can be prepared for eventual hardware failure by knowing how to recover the data from another storage cluster where mirroring was configured. In the examples, the primary storage cluster is known as site-a , and the secondary storage cluster is known as site-b . Additionally, the storage clusters both have a data pool with two images, image1 and image2 . 5.8.1. Prerequisites A running Red Hat Ceph Storage cluster. One-way or two-way mirroring was configured. 5.8.2. Disaster recovery Asynchronous replication of block data between two or more Red Hat Ceph Storage clusters reduces downtime and prevents data loss in the event of a significant data center failure. These failures have a widespread impact, also referred to as a large blast radius , and can be caused by impacts to the power grid and natural disasters. Customer data needs to be protected during these scenarios. Volumes must be replicated with consistency and efficiency and also within Recovery Point Objective (RPO) and Recovery Time Objective (RTO) targets. This solution is called Wide Area Network-Disaster Recovery (WAN-DR). In such scenarios, it is hard to restore the primary system and the data center. The quickest way to recover is to fail over the applications to an alternate Red Hat Ceph Storage cluster (disaster recovery site) and make the cluster operational with the latest copy of the data available. The solutions that are used to recover from these failure scenarios are guided by the application: Recovery Point Objective (RPO) : The amount of data loss that an application can tolerate in the worst case. Recovery Time Objective (RTO) : The time taken to get the application back online with the latest copy of the data available. Additional Resources See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details. See the Encryption in transit section in the Red Hat Ceph Storage Data Security and Hardening Guide to learn more about data transmission over the wire in an encrypted state. 5.8.3. Recover from a disaster with one-way mirroring To recover from a disaster when using one-way mirroring, use the following procedures. They show how to fail over to the secondary cluster after the primary cluster terminates, and how to fail back. The shutdown can be orderly or non-orderly. Important One-way mirroring supports multiple secondary sites. If you are using additional secondary clusters, choose one of the secondary clusters to fail over to.
Synchronize from the same cluster during fail back. 5.8.4. Recover from a disaster with two-way mirroring To recover from a disaster when using two-way mirroring use the following procedures. They show how to fail over to the mirrored data on the secondary cluster after the primary cluster terminates, and how to failback. The shutdown can be orderly or non-orderly. Additional Resources For details on demoting, promoting, and resyncing images, see the Configure mirroring on a image section in the Red Hat Ceph Storage Block Device Guide . 5.8.5. Failover after an orderly shutdown Failover to the secondary storage cluster after an orderly shutdown. Prerequisites Minimum of two running Red Hat Ceph Storage clusters. Root-level access to the node. Pool mirroring or image mirroring configured with one-way mirroring . Procedure Stop all clients that use the primary image. This step depends on which clients use the image. For example, detach volumes from any OpenStack instances that use the image. Demote the primary images located on the site-a cluster by running the following commands on a monitor node in the site-a cluster: Syntax Example Promote the non-primary images located on the site-b cluster by running the following commands on a monitor node in the site-b cluster: Syntax Example After some time, check the status of the images from a monitor node in the site-b cluster. They should show a state of up+stopped and be listed as primary: Resume the access to the images. This step depends on which clients use the image. Additional Resources See the Block Storage and Volumes chapter in the Red Hat OpenStack Platform Storage Guide . 5.8.6. Failover after a non-orderly shutdown Failover to secondary storage cluster after a non-orderly shutdown. Prerequisites Minimum of two running Red Hat Ceph Storage clusters. Root-level access to the node. Pool mirroring or image mirroring configured with one-way mirroring . Procedure Verify that the primary storage cluster is down. Stop all clients that use the primary image. This step depends on which clients use the image. For example, detach volumes from any OpenStack instances that use the image. Promote the non-primary images from a Ceph Monitor node in the site-b storage cluster. Use the --force option, because the demotion cannot be propagated to the site-a storage cluster: Syntax Example Check the status of the images from a Ceph Monitor node in the site-b storage cluster. They should show a state of up+stopping_replay and the description should say force promoted : Example Additional Resources See the Block Storage and Volumes chapter in the Red Hat OpenStack Platform Storage Guide . 5.8.7. Prepare for fail back If two storage clusters were originally configured only for one-way mirroring, in order to fail back, configure the primary storage cluster for mirroring in order to replicate the images in the opposite direction. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure On the client node of the site-a storage cluster, install the rbd-mirror package: Note The package is provided by the Red Hat Ceph Storage Tools repository. 
On the client node of the site-a storage cluster, specify the storage cluster name by adding the CLUSTER option to the /etc/sysconfig/ceph file: Copy the site-b Ceph configuration file and keyring file from the site-b Ceph Monitor node to the site-a Ceph Monitor and client nodes: Syntax Note The scp commands that transfer the Ceph configuration file from the site-b Ceph Monitor node to the site-a Ceph Monitor and client nodes renames the file to site-a.conf . The keyring file name stays the same. Copy the site-a keyring file from the site-a Ceph Monitor node to the site-a client node: Syntax Enable and start the rbd-mirror daemon on the site-a client node: Syntax Change CLIENT_ID to the Ceph Storage cluster user that the rbd-mirror daemon will use. The user must have the appropriate cephx access to the storage cluster. Example From the client node on the site-a cluster, add the site-b cluster as a peer: Example If you are using multiple secondary storage clusters, only the secondary storage cluster chosen to fail over to, and fail back from, must be added. From a monitor node in the site-a storage cluster, verify the site-b storage cluster was successfully added as a peer: Syntax Example Additional Resources For detailed information, see the User Management chapter in the Red Hat Ceph Storage Administration Guide . 5.8.7.1. Fail back to the primary storage cluster When the formerly primary storage cluster recovers, fail back to the primary storage cluster. Prerequisites Minimum of two running Red Hat Ceph Storage clusters. Root-level access to the node. Pool mirroring or image mirroring configured with one-way mirroring . Procedure Check the status of the images from a monitor node in the site-b cluster again. They should show a state of up-stopped and the description should say local image is primary : Example From a Ceph Monitor node on the site-a storage cluster determine if the images are still primary: Syntax Example In the output from the commands, look for mirroring primary: true or mirroring primary: false , to determine the state. Demote any images that are listed as primary by running a command like the following from a Ceph Monitor node in the site-a storage cluster: Syntax Example Resynchronize the images ONLY if there was a non-orderly shutdown. Run the following commands on a monitor node in the site-a storage cluster to resynchronize the images from site-b to site-a : Syntax Example After some time, ensure resynchronization of the images is complete by verifying they are in the up+replaying state. Check their state by running the following commands on a monitor node in the site-a storage cluster: Syntax Example Demote the images on the site-b storage cluster by running the following commands on a Ceph Monitor node in the site-b storage cluster: Syntax Example Note If there are multiple secondary storage clusters, this only needs to be done from the secondary storage cluster where it was promoted. Promote the formerly primary images located on the site-a storage cluster by running the following commands on a Ceph Monitor node in the site-a storage cluster: Syntax Example Check the status of the images from a Ceph Monitor node in the site-a storage cluster. They should show a status of up+stopped and the description should say local image is primary : Syntax Example 5.8.8. Remove two-way mirroring After fail back is complete, you can remove two-way mirroring and disable the Ceph block device mirroring service. Prerequisites A running Red Hat Ceph Storage cluster. 
Root-level access to the node. Procedure Remove the site-b storage cluster as a peer from the site-a storage cluster: Example Stop and disable the rbd-mirror daemon on the site-a client: Syntax Example
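A minimal sketch of that final step, assuming the daemon instance was enabled as site-a as in the fail back examples earlier in this chapter:

# Stop and disable the rbd-mirror daemon instance on the site-a client
systemctl stop ceph-rbd-mirror@site-a
systemctl disable ceph-rbd-mirror@site-a
systemctl disable ceph-rbd-mirror.target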
[ "rbd create IMAGE_NAME --size MEGABYTES --pool POOL_NAME --image-feature FEATURE[,FEATURE]", "rbd create image1 --size 1024 --pool data --image-feature exclusive-lock,journaling", "rbd feature enable POOL_NAME / IMAGE_NAME FEATURE_NAME", "rbd feature enable data/image1 exclusive-lock,journaling", "rbd_default_features = 125", "ceph auth get-or-create client. CLUSTER_NAME mon 'profile rbd' osd 'profile rbd pool=data' -o /etc/ceph/ CLUSTER_NAME .client. USER_NAME .keyring", "ceph auth get-or-create client.site-a mon 'profile rbd' osd 'profile rbd pool=data' -o /etc/ceph/site-a.client.site-a.keyring", "[rbdmirrors] ceph-client", "cd /usr/share/ceph-ansible", "cp group_vars/rbdmirrors.yml.sample group_vars/rbdmirrors.yml", "ceph_rbd_mirror_configure: true ceph_rbd_mirror_pool: \"data\"", "ceph_rbd_mirror_mode: image", "ceph_rbd_mirror_remote_cluster: \"site-a\"", "ceph_rbd_mirror_remote_user: \"client.site-a\"", "[user@admin ceph-ansible]USD ansible-playbook site.yml --limit rbdmirrors -i hosts", "[ansible@admin ceph-ansible]USD ansible-playbook site-container.yml --limit rbdmirrors -i hosts", "rbd mirror image enable POOL / IMAGE", "rbd mirror image enable POOL / IMAGE snapshot", "rbd mirror image enable data/image1 rbd mirror image enable data/image1 snapshot", "rbd mirror image status data/image1 image1: global_id: 7d486c3f-d5a1-4bee-ae53-6c4f1e0c8eac state: up+replaying 1 description: replaying, master_position=[object_number=3, tag_tid=1, entry_tid=3], mirror_position=[object_number=3, tag_tid=1, entry_tid=3], entries_behind_master=0 last_update: 2019-04-22 13:19:27", "rbd mirror image status data/image1 image1: global_id: 06acc9e6-a63d-4aa1-bd0d-4f3a79b0ae33 state: up+replaying 1 description: replaying, {\"bytes_per_second\":0.0,\"bytes_per_snapshot\":0.0,\"local_snapshot_timestamp\":1642689843,\"remote_snapshot_timestamp\":1642689843,\"replay_state\":\"idle\"} service: admin on ceph-rbd2-vasi-43-5hwia4-node2 last_update: 2022-01-20 12:41:57", "rbd create IMAGE_NAME --size MEGABYTES --pool POOL_NAME --image-feature FEATURE[,FEATURE]", "rbd create image1 --size 1024 --pool data --image-feature exclusive-lock,journaling", "rbd feature enable POOL_NAME / IMAGE_NAME FEATURE_NAME", "rbd feature enable data/image1 exclusive-lock,journaling", "rbd_default_features = 125", "ceph auth get-or-create client. PRIMARY_CLUSTER_NAME mon 'profile rbd' osd 'profile rbd pool=data' -o /etc/ceph/ PRIMARY_CLUSTER_NAME .client. USER_NAME .keyring -c /etc/ceph/ PRIMARY_CLUSTER_NAME .conf", "ceph auth get-or-create client.site-a mon 'profile rbd' osd 'profile rbd pool=data' -o /etc/ceph/site-a.client.site-a.keyring -c /etc/ceph/site-a.conf", "scp /etc/ceph/ PRIMARY_CLUSTER_NAME .client. USER_NAME .keyring root@ SECONDARY_CLIENT_NODE_NAME :/etc/ceph/ PRIMARY_CLUSTER_NAME .client. 
USER_NAME .keyring", "scp /etc/ceph/site-a.client.site-a.keyring [email protected]:/etc/ceph/site-a.client.site-a.keyring", "scp /etc/ceph/ PRIMARY_CLUSTER_NAME .conf root@ SECONDARY_MONITOR_NODE_NAME :/etc/ceph/ PRIMARY_CLUSTER_NAME .conf scp /etc/ceph/ PRIMARY_CLUSTER_NAME .conf user@ SECONDARY_CLIENT_NODE_NAME :/etc/ceph/ PRIMARY_CLUSTER_NAME .conf", "scp /etc/ceph/site-a.conf [email protected]:/etc/ceph/site-a.conf scp /etc/ceph/site-a.conf [email protected]:/etc/ceph/site-a.conf", "[rbdmirrors] client.site-b", "[root@admin ~]USD cd /usr/share/ceph-ansible", "cp group_vars/rbdmirrors.yml.sample group_vars/rbdmirrors.yml", "ceph_rbd_mirror_configure: true ceph_rbd_mirror_pool: \"data\"", "ceph_rbd_mirror_mode: image", "ceph_rbd_mirror_remote_cluster: \"site-a\"", "ceph_rbd_mirror_remote_user: \"client.site-a\"", "[user@admin ceph-ansible]USD ansible-playbook site.yml --limit rbdmirrors -i hosts", "[user@admin ceph-ansible]USD ansible-playbook site-container.yml --limit rbdmirrors -i hosts", "rbd mirror image status data/image1 image1: global_id: 7d486c3f-d5a1-4bee-ae53-6c4f1e0c8eac state: up+replaying 1 description: replaying, master_position=[object_number=3, tag_tid=1, entry_tid=3], mirror_position=[object_number=3, tag_tid=1, entry_tid=3], entries_behind_master=0 last_update: 2021-04-22 13:19:27", "rbd mirror image status data/image1 image1: global_id: 06acc9e6-a63d-4aa1-bd0d-4f3a79b0ae33 state: up+replaying 1 description: replaying, {\"bytes_per_second\":0.0,\"bytes_per_snapshot\":0.0,\"local_snapshot_timestamp\":1642689843,\"remote_snapshot_timestamp\":1642689843,\"replay_state\":\"idle\"} service: admin on ceph-rbd2-vasi-43-5hwia4-node2 last_update: 2022-01-20 12:41:57", "ceph auth get-or-create client. SECONDARY_CLUSTER_NAME mon 'profile rbd' osd 'profile rbd pool=data' -o /etc/ceph/ SECONDARY_CLUSTER_NAME .client. USER_NAME .keyring -c /etc/ceph/ SECONDARY_CLUSTER_NAME .conf", "ceph auth get-or-create client.site-b mon 'profile rbd' osd 'profile rbd pool=data' -o /etc/ceph/site-b.client.site-b.keyring -c /etc/ceph/site-b.conf", "scp /etc/ceph/ SECONDARY_CLUSTER_NAME .client. USER_NAME .keyring root@ PRIMARY_CLIENT_NODE_NAME :/etc/ceph/ SECONDARY_CLUSTER_NAME .client. 
USER_NAME .keyring", "scp /etc/ceph/site-b.client.site-b.keyring [email protected]:/etc/ceph/site-b.client.site-b.keyring", "scp /etc/ceph/ SECONDARY_CLUSTER_NAME .conf root@ PRIMARY_MONITOR_NODE_NAME :/etc/ceph/ SECONDARY_CLUSTER_NAME .conf scp /etc/ceph/ SECONDARY_CLUSTER_NAME .conf user@ PRIMARY_CLIENT_NODE_NAME :/etc/ceph/ SECONDARY_CLUSTER_NAME .conf", "scp /etc/ceph/site-b.conf [email protected]:/etc/ceph/site-b.conf scp /etc/ceph/site-b.conf [email protected]:/etc/ceph/site-b.conf", "[rbdmirrors] client.site-a", "cd /usr/share/ceph-ansible", "cp group_vars/rbdmirrors.yml.sample group_vars/rbdmirrors.yml", "ceph_rbd_mirror_configure: true ceph_rbd_mirror_pool: \"data\"", "ceph_rbd_mirror_mode: image", "ceph_rbd_mirror_remote_cluster: \"site-b\"", "ceph_rbd_mirror_remote_user: \"client.site-b\"", "[user@admin ceph-ansible]USD ansible-playbook site.yml --limit rbdmirrors -i hosts", "[user@admin ceph-ansible]USD ansible-playbook site-container.yml --limit rbdmirrors -i hosts", "rbd mirror image enable POOL / IMAGE", "rbd mirror image enable POOL / IMAGE snapshot", "rbd mirror image enable data/image1 rbd mirror image enable data/image1 snapshot", "rbd mirror image status data/image1 image1: global_id: 08027096-d267-47f8-b52e-59de1353a034 state: up+stopped 1 description: local image is primary last_update: 2021-04-16 15:45:31", "rbd mirror image status data/image1 image1: global_id: 47fd1aae-5f19-4193-a5df-562b5c644ea7 state: up+stopped 1 description: local image is primary service: admin on ceph-rbd1-vasi-43-5hwia4-node2 last_update: 2022-01-20 12:42:54 peer_sites: name: rbd-mirror.site-b state: up+replaying description: replaying, {\"bytes_per_second\":0.0,\"bytes_per_snapshot\":0.0,\"local_snapshot_timestamp\":1642693094,\"remote_snapshot_timestamp\":1642693094,\"replay_state\":\"idle\"} last_update: 2022-01-20 12:42:59 snapshots: 5 .mirror.primary.47fd1aae-5f19-4193-a5df-562b5c644ea7.dda146c6-5f21-4e75-ba93-660f6e57e301 (peer_uuids:[bfd09289-c9c9-40c8-b2d3-ead9b6a99a45])", "yum install rbd-mirror", "dnf install rbd-mirror", "rbd create IMAGE_NAME --size MEGABYTES --pool POOL_NAME --image-feature FEATURE [,FEATURE]", "rbd create image1 --size 1024 --pool data --image-feature exclusive-lock,journaling", "rbd feature enable POOL_NAME / IMAGE_NAME FEATURE [, FEATURE ]", "rbd feature enable data/image1 exclusive-lock,journaling", "rbd_default_features = 125", "rbd mirror pool enable POOL_NAME MODE", "rbd mirror pool enable data pool", "rbd mirror pool enable POOL_NAME MODE", "rbd mirror pool enable data image", "rbd mirror pool info POOL_NAME", "rbd mirror pool info data Mode: image Site Name: 94cbd9ca-7f9a-441a-ad4b-52a33f9b7148 Peer Sites: none", "ceph auth get-or-create client. PRIMARY_CLUSTER_NAME mon 'profile rbd-mirror' osd 'profile rbd' -o /etc/ceph/ceph. PRIMARY_CLUSTER_NAME .keyring", "ceph auth get-or-create client.rbd-mirror.site-a mon 'profile rbd-mirror' osd 'profile rbd' -o /etc/ceph/ceph.client.rbd-mirror.site-a.keyring", "scp /etc/ceph/ceph. PRIMARY_CLUSTER_NAME .keyring root@ SECONDARY_CLUSTER :_PATH_", "scp /etc/ceph/ceph.client.rbd-mirror.site-a.keyring root@rbd-client-site-b:/etc/ceph/", "rbd mirror pool peer bootstrap create --site-name LOCAL_SITE_NAME POOL_NAME > PATH_TO_BOOTSTRAP_TOKEN", "rbd mirror pool peer bootstrap create --site-name rbd-mirror.site-a data > /root/bootstrap_token_rbd-mirror.site-a", "scp PATH_TO_BOOTSTRAP_TOKEN root@ SECONDARY_CLUSTER :/root/", "scp /root/bootstrap_token_site-a root@ceph-rbd2:/root/", "ceph auth get-or-create client. 
SECONDARY_CLUSTER_NAME mon 'profile rbd-mirror' osd 'profile rbd' -o /etc/ceph/ceph. SECONDARY_CLUSTER_NAME .keyring", "ceph auth get-or-create client.rbd-mirror.site-b mon 'profile rbd-mirror' osd 'profile rbd' -o /etc/ceph/ceph.client.rbd-mirror.site-b.keyring", "scp /etc/ceph/ceph. SECONDARY_CLUSTER_NAME .keyring root@ PRIMARY_CLUSTER :_PATH_", "scp /etc/ceph/ceph.client.rbd-mirror.site-b.keyring root@rbd-client-site-a:/etc/ceph/", "rbd mirror pool peer bootstrap import --site-name LOCAL_SITE_NAME --direction rx-only POOL_NAME PATH_TO_BOOTSTRAP_TOKEN", "rbd mirror pool peer bootstrap import --site-name rbd-mirror.site-b --direction rx-only data /root/bootstrap_token_rbd-mirror.site-a", "systemctl enable ceph-rbd-mirror.target systemctl enable ceph-rbd-mirror@rbd-mirror. CLIENT_ID systemctl start ceph-rbd-mirror@rbd-mirror. CLIENT_ID", "systemctl enable ceph-rbd-mirror.target systemctl enable [email protected] systemctl start [email protected]", "rbd mirror image status POOL_NAME / IMAGE_NAME", "rbd mirror image status data/image1 image1: global_id: 08027096-d267-47f8-b52e-59de1353a034 state: up+stopped 1 description: local image is primary last_update: 2021-04-22 13:45:31", "rbd mirror image status data/image1 image1: global_id: 47fd1aae-5f19-4193-a5df-562b5c644ea7 state: up+stopped 1 description: local image is primary service: admin on ceph-rbd1-vasi-43-5hwia4-node2 last_update: 2022-01-20 12:42:54 peer_sites: name: rbd-mirror.site-b state: up+replaying description: replaying, {\"bytes_per_second\":0.0,\"bytes_per_snapshot\":0.0,\"local_snapshot_timestamp\":1642693094,\"remote_snapshot_timestamp\":1642693094,\"replay_state\":\"idle\"} last_update: 2022-01-20 12:42:59 snapshots: 5 .mirror.primary.47fd1aae-5f19-4193-a5df-562b5c644ea7.dda146c6-5f21-4e75-ba93-660f6e57e301 (peer_uuids:[bfd09289-c9c9-40c8-b2d3-ead9b6a99a45])", "rbd mirror image status data/image1 image1: global_id: 7d486c3f-d5a1-4bee-ae53-6c4f1e0c8eac state: up+replaying 1 description: replaying, master_position=[object_number=3, tag_tid=1, entry_tid=3], mirror_position=[object_number=3, tag_tid=1, entry_tid=3], entries_behind_master=0 last_update: 2021-04-22 14:19:27", "rbd mirror image status data/image1 image1: global_id: 06acc9e6-a63d-4aa1-bd0d-4f3a79b0ae33 state: up+replaying 1 description: replaying, {\"bytes_per_second\":0.0,\"bytes_per_snapshot\":0.0,\"local_snapshot_timestamp\":1642689843,\"remote_snapshot_timestamp\":1642689843,\"replay_state\":\"idle\"} service: admin on ceph-rbd2-vasi-43-5hwia4-node2 last_update: 2022-01-20 12:41:57", "yum install rbd-mirror", "dnf install rbd-mirror", "rbd create IMAGE_NAME --size MEGABYTES --pool POOL_NAME --image-feature FEATURE [,FEATURE]", "rbd create image1 --size 1024 --pool data --image-feature exclusive-lock,journaling", "rbd feature enable POOL_NAME / IMAGE_NAME FEATURE [, FEATURE ]", "rbd feature enable data/image1 exclusive-lock,journaling", "rbd_default_features = 125", "rbd mirror pool enable POOL_NAME MODE", "rbd mirror pool enable data pool", "rbd mirror pool enable POOL_NAME MODE", "rbd mirror pool enable data image", "rbd mirror pool info POOL_NAME", "rbd mirror pool info data Mode: image Site Name: 94cbd9ca-7f9a-441a-ad4b-52a33f9b7148 Peer Sites: none", "ceph auth get-or-create client. PRIMARY_CLUSTER_NAME mon 'profile rbd-mirror' osd 'profile rbd' -o /etc/ceph/ceph. 
PRIMARY_CLUSTER_NAME .keyring", "ceph auth get-or-create client.rbd-mirror.site-a mon 'profile rbd-mirror' osd 'profile rbd' -o /etc/ceph/ceph.client.rbd-mirror.site-a.keyring", "scp /etc/ceph/ceph. PRIMARY_CLUSTER_NAME .keyring root@ SECONDARY_CLUSTER :_PATH_", "scp /etc/ceph/ceph.client.rbd-mirror.site-a.keyring root@rbd-client-site-b:/etc/ceph/", "rbd mirror pool peer bootstrap create --site-name LOCAL_SITE_NAME POOL_NAME > PATH_TO_BOOTSTRAP_TOKEN", "rbd mirror pool peer bootstrap create --site-name rbd-mirror.site-a data > /root/bootstrap_token_rbd-mirror.site-a", "scp PATH_TO_BOOTSTRAP_TOKEN root@ SECONDARY_CLUSTER :/root/", "scp /root/bootstrap_token_site-a root@ceph-rbd2:/root/", "ceph auth get-or-create client. SECONDARY_CLUSTER_NAME mon 'profile rbd-mirror' osd 'profile rbd' -o /etc/ceph/ceph. SECONDARY_CLUSTER_NAME .keyring", "ceph auth get-or-create client.rbd-mirror.site-b mon 'profile rbd-mirror' osd 'profile rbd' -o /etc/ceph/ceph.client.rbd-mirror.site-b.keyring", "scp /etc/ceph/ceph. SECONDARY_CLUSTER_NAME .keyring root@ PRIMARY_CLUSTER :_PATH_", "scp /etc/ceph/ceph.client.rbd-mirror.site-b.keyring root@rbd-client-site-a:/etc/ceph/", "rbd mirror pool peer bootstrap import --site-name LOCAL_SITE_NAME --direction rx-tx POOL_NAME PATH_TO_BOOTSTRAP_TOKEN", "rbd mirror pool peer bootstrap import --site-name rbd-mirror.site-b --direction rx-tx data /root/bootstrap_token_rbd-mirror.site-a", "systemctl enable ceph-rbd-mirror.target systemctl enable ceph-rbd-mirror@rbd-mirror. CLIENT_ID systemctl start ceph-rbd-mirror@rbd-mirror. CLIENT_ID", "systemctl enable ceph-rbd-mirror.target systemctl enable [email protected] systemctl start [email protected] systemctl enable [email protected] systemctl start [email protected]", "systemctl enable ceph-rbd-mirror.target systemctl enable [email protected] systemctl start [email protected] systemctl enable [email protected] systemctl start [email protected]", "rbd mirror image status POOL_NAME / IMAGE_NAME", "rbd mirror image status data/image1 image1: global_id: 08027096-d267-47f8-b52e-59de1353a034 state: up+stopped 1 description: local image is primary last_update: 2021-04-22 13:45:31", "rbd mirror image status data/image1 image1: global_id: 47fd1aae-5f19-4193-a5df-562b5c644ea7 state: up+stopped 1 description: local image is primary service: admin on ceph-rbd1-vasi-43-5hwia4-node2 last_update: 2022-01-20 12:42:54 peer_sites: name: rbd-mirror.site-b state: up+replaying description: replaying, {\"bytes_per_second\":0.0,\"bytes_per_snapshot\":0.0,\"local_snapshot_timestamp\":1642693094,\"remote_snapshot_timestamp\":1642693094,\"replay_state\":\"idle\"} last_update: 2022-01-20 12:42:59 snapshots: 5 .mirror.primary.47fd1aae-5f19-4193-a5df-562b5c644ea7.dda146c6-5f21-4e75-ba93-660f6e57e301 (peer_uuids:[bfd09289-c9c9-40c8-b2d3-ead9b6a99a45])", "rbd mirror image status data/image1 image1: global_id: 7d486c3f-d5a1-4bee-ae53-6c4f1e0c8eac state: up+replaying 1 description: replaying, master_position=[object_number=3, tag_tid=1, entry_tid=3], mirror_position=[object_number=3, tag_tid=1, entry_tid=3], entries_behind_master=0 last_update: 2021-04-22 14:19:27", "rbd mirror image status data/image1 image1: global_id: 06acc9e6-a63d-4aa1-bd0d-4f3a79b0ae33 state: up+replaying 1 description: replaying, {\"bytes_per_second\":0.0,\"bytes_per_snapshot\":0.0,\"local_snapshot_timestamp\":1642689843,\"remote_snapshot_timestamp\":1642689843,\"replay_state\":\"idle\"} service: admin on ceph-rbd2-vasi-43-5hwia4-node2 last_update: 2022-01-20 12:41:57", "rbd mirror pool info 
POOL_NAME", "rbd mirror pool info data Mode: pool Site Name: site-a Peer Sites: UUID: 950ddadf-f995-47b7-9416-b9bb233f66e3 Name: site-b Mirror UUID: 4696cd9d-1466-4f98-a97a-3748b6b722b3 Direction: rx-tx Client: client.site-b", "rbd mirror pool enable POOL_NAME MODE", "rbd mirror pool enable data pool", "rbd mirror pool enable data image", "rbd mirror pool disable POOL_NAME", "rbd mirror pool disable data", "rbd mirror image enable POOL_NAME / IMAGE_NAME", "rbd mirror image enable data/image2", "rbd mirror image disable POOL_NAME / IMAGE_NAME", "rbd mirror image disable data/image2", "rbd mirror image demote POOL_NAME / IMAGE_NAME", "rbd mirror image demote data/image2", "rbd mirror image promote POOL_NAME / IMAGE_NAME", "rbd mirror image promote data/image2", "rbd mirror image promote --force POOL_NAME / IMAGE_NAME", "rbd mirror image promote --force data/image2", "rbd mirror image resync POOL_NAME / IMAGE_NAME", "rbd mirror image resync data/image2", "rbd --cluster CLUSTER_NAME mirror pool peer add POOL_NAME PEER_CLIENT_NAME @ PEER_CLUSTER_NAME -n CLIENT_NAME", "rbd --cluster site-b mirror pool peer add data client.site-a@site-a -n client.site-b", "rbd mirror pool peer remove POOL_NAME PEER_UUID", "rbd mirror pool peer remove data 7e90b4ce-e36d-4f07-8cbc-42050896825d", "rbd mirror pool status POOL_NAME", "rbd mirror pool status data health: OK images: 1 total", "rbd mirror image status POOL_NAME / IMAGE_NAME", "rbd mirror image status data/image2 image2: global_id: 703c4082-100d-44be-a54a-52e6052435a5 state: up+replaying description: replaying, master_position=[object_number=0, tag_tid=3, entry_tid=0], mirror_position=[object_number=0, tag_tid=3, entry_tid=0], entries_behind_master=0 last_update: 2019-04-23 13:39:15", "rbd image-meta set POOL_NAME / IMAGE_NAME conf_rbd_mirroring_replay_delay MINIMUM_DELAY_IN_SECONDS", "rbd image-meta set vms/vm-1 conf_rbd_mirroring_replay_delay 600", "rbd --cluster CLUSTER_NAME mirror image snapshot POOL_NAME / IMAGE_NAME", "root@rbd-client ~]# rbd --cluster site-a mirror image snapshot data/image1", "rbd mirror snapshot schedule add --pool POOL_NAME --image IMAGE_NAME INTERVAL START_TIME", "rbd mirror snapshot schedule add --pool data --image image1 6h", "rbd mirror snapshot schedule add --pool data 24h 14:00:00-05:00", "rbd mirror snapshot schedule add 48h", "rbd --cluster site-a mirror snapshot schedule ls --pool POOL_NAME --recursive", "rbd --cluster site-a mirror snapshot schedule ls --pool data --recursive POOL NAMESPACE IMAGE SCHEDULE data - - every 1d starting at 14:00:00-05:00 data - image1 every 6h", "rbd --cluster CLUSTER_NAME mirror snapshot schedule remove POOL_NAME / IMAGE_NAME INTERVAL START_TIME", "rbd --cluster site-a mirror snapshot schedule remove data/image1 6h", "rbd --cluster site-a mirror snapshot schedule remove data/image1 24h 14:00:00-05:00", "rbd --cluster site-a mirror snapshot schedule status POOL_NAME / IMAGE_NAME", "rbd --cluster site-a mirror snapshot schedule status SCHEDULE TIME IMAGE 2020-02-26 18:00:00 data/image1", "rbd mirror image demote POOL_NAME / IMAGE_NAME", "rbd mirror image demote data/image1 rbd mirror image demote data/image2", "rbd mirror image promote POOL_NAME / IMAGE_NAME", "rbd mirror image promote data/image1 rbd mirror image promote data/image2", "rbd mirror image status data/image1 image1: global_id: 08027096-d267-47f8-b52e-59de1353a034 state: up+stopped description: local image is primary last_update: 2019-04-17 16:04:37 rbd mirror image status data/image2 image2: global_id: 
596f41bc-874b-4cd4-aefe-4929578cc834 state: up+stopped description: local image is primary last_update: 2019-04-17 16:04:37", "rbd mirror image promote --force POOL_NAME / IMAGE_NAME", "rbd mirror image promote --force data/image1 rbd mirror image promote --force data/image2", "rbd mirror image status data/image1 image1: global_id: 08027096-d267-47f8-b52e-59de1353a034 state: up+stopping_replay description: force promoted last_update: 2019-04-17 13:25:06 rbd mirror image status data/image2 image2: global_id: 596f41bc-874b-4cd4-aefe-4929578cc834 state: up+stopping_replay description: force promoted last_update: 2019-04-17 13:25:06", "yum install rbd-mirror", "CLUSTER=site-b", "scp /etc/ceph/ceph.conf USER @ SITE_A_MON_NODE_NAME :/etc/ceph/site-b.conf scp /etc/ceph/site-b.client.site-b.keyring root@ SITE_A_MON_NODE_NAME :/etc/ceph/ scp /etc/ceph/ceph.conf user@ SITE_A_CLIENT_NODE_NAME :/etc/ceph/site-b.conf scp /etc/ceph/site-b.client.site-b.keyring user@ SITE_A_CLIENT_NODE_NAME :/etc/ceph/", "scp /etc/ceph/site-a.client.site-a.keyring <user>@ SITE_A_CLIENT_HOST_NAME :/etc/ceph/", "systemctl enable ceph-rbd-mirror.target systemctl enable ceph-rbd-mirror@ CLIENT_ID systemctl start ceph-rbd-mirror@ CLIENT_ID", "systemctl enable ceph-rbd-mirror.target systemctl enable ceph-rbd-mirror@site-a systemctl start ceph-rbd-mirror@site-a", "rbd --cluster site-a mirror pool peer add data client.site-b@site-b -n client.site-a", "rbd mirror pool info POOL_NAME", "rbd mirror pool info data Mode: image Site Name: site-a Peer Sites: UUID: 950ddadf-f995-47b7-9416-b9bb233f66e3 Name: site-b Mirror UUID: 4696cd9d-1466-4f98-a97a-3748b6b722b3 Direction: rx-tx Client: client.site-b", "rbd mirror image status data/image1 image1: global_id: 08027096-d267-47f8-b52e-59de1353a034 state: up+stopped description: local image is primary last_update: 2019-04-22 17:37:48 rbd mirror image status data/image2 image2: global_id: 08027096-d267-47f8-b52e-59de1353a034 state: up+stopped description: local image is primary last_update: 2019-04-22 17:38:18", "rbd info POOL_NAME / IMAGE_NAME", "rbd info data/image1 rbd info data/image2", "rbd mirror image demote POOL_NAME / IMAGE_NAME", "rbd mirror image demote data/image1", "rbd mirror image resync POOL_NAME / IMAGE_NAME", "rbd mirror image resync data/image1 Flagged image for resync from primary rbd mirror image resync data/image2 Flagged image for resync from primary", "rbd mirror image status POOL_NAME / IMAGE_NAME", "rbd mirror image status data/image1 rbd mirror image status data/image2", "rbd mirror image demote POOL_NAME / IMAGE_NAME", "rbd mirror image demote data/image1 rbd mirror image demote data/image2", "rbd mirror image promote POOL_NAME / IMAGE_NAME", "rbd mirror image promote data/image1 rbd mirror image promote data/image2", "rbd mirror image status POOL_NAME / IMAGE_NAME", "rbd mirror image status data/image1 image1: global_id: 08027096-d267-47f8-b52e-59de1353a034 state: up+stopped description: local image is primary last_update: 2019-04-22 11:14:51 rbd mirror image status data/image2 image2: global_id: 596f41bc-874b-4cd4-aefe-4929578cc834 state: up+stopped description: local image is primary last_update: 2019-04-22 11:14:51", "rbd mirror pool peer remove data client.remote@remote --cluster local rbd --cluster site-a mirror pool peer remove data client.site-b@site-b -n client.site-a", "systemctl stop ceph-rbd-mirror@ CLIENT_ID systemctl disable ceph-rbd-mirror@ CLIENT_ID systemctl disable ceph-rbd-mirror.target", "systemctl stop ceph-rbd-mirror@site-a systemctl disable 
ceph-rbd-mirror@site-a systemctl disable ceph-rbd-mirror.target" ]
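Pulling the peer bootstrap commands above together, the exchange between the two sites is roughly the following sketch; the site names, the data pool, the ceph-rbd2 host, and the token path are the example values used in this chapter, and the --direction flag is rx-only for one-way or rx-tx for two-way mirroring, as described earlier:

# On site-a: create a bootstrap token for the data pool
rbd mirror pool peer bootstrap create --site-name rbd-mirror.site-a data > /root/bootstrap_token_rbd-mirror.site-a

# Copy the token to the site-b client node
scp /root/bootstrap_token_rbd-mirror.site-a root@ceph-rbd2:/root/

# On site-b: import the token (two-way mirroring shown here)
rbd mirror pool peer bootstrap import --site-name rbd-mirror.site-b --direction rx-tx data /root/bootstrap_token_rbd-mirror.site-a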
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/block_device_guide/mirroring-ceph-block-devices
Chapter 38. Unregistering from Red Hat Subscription Management Services
Chapter 38. Unregistering from Red Hat Subscription Management Services A system can only be registered with one subscription service. If you need to change which service your system is registered with or need to delete the registration in general, then the method to unregister depends on which type of subscription service the system was originally registered with. 38.1. Systems Registered with Red Hat Subscription Management Several different subscription services use the same, certificate-based framework to identify systems, installed products, and attached subscriptions. These services are Customer Portal Subscription Management (hosted), Subscription Asset Manager (on-premise subscription service), and CloudForms System Engine (on-premise subscription and content delivery services). These are all part of Red Hat Subscription Management . For all services within Red Hat Subscription Management, the systems are managed with the Red Hat Subscription Manager client tools. To unregister a system registered with a Red Hat Subscription Management server, use the unregister command. Note This command must be run as root.
[ "subscription-manager unregister --username=name" ]
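A minimal sketch of unregistering and then confirming the result; the username is a placeholder for your subscription service account:

# Unregister the system (run as root)
subscription-manager unregister --username=name

# Confirm the system no longer reports a registered identity
subscription-manager identity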
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-deregister_RHN_entitlement
11.10. Replacing Hosts
11.10. Replacing Hosts Before replacing hosts ensure that the new peer has the exact disk capacity as that of the one it is replacing. For example, if the peer in the cluster has two 100GB drives, then the new peer must have the same disk capacity and number of drives. Also, steps described in this section can be performed on other volumes types as well, refer to Section 11.9, "Migrating Volumes" when performing replace and reset operations on the volumes. 11.10.1. Replacing a Host Machine with a Different Hostname You can replace a failed host machine with another host that has a different hostname. In the following example the original machine which has had an irrecoverable failure is server0.example.com and the replacement machine is server5.example.com . The brick with an unrecoverable failure is server0.example.com:/rhgs/brick1 and the replacement brick is server5.example.com:/rhgs/brick1 . Stop the geo-replication session if configured by executing the following command. Probe the new peer from one of the existing peers to bring it into the cluster. Ensure that the new brick (server5.example.com:/rhgs/brick1) that is replacing the old brick (server0.example.com:/rhgs/brick1) is empty. If the geo-replication session is configured, perform the following steps: Setup the geo-replication session by generating the ssh keys: Create geo-replication session again with force option to distribute the keys from new nodes to Slave nodes. After successfully setting up the shared storage volume, when a new node is replaced in the cluster, the shared storage is not mounted automatically on this node. Neither is the /etc/fstab entry added for the shared storage on this node. To make use of shared storage on this node, execute the following commands: Note With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ . For more information on setting up shared storage volume, see Section 11.12, "Setting up Shared Storage Volume" . Configure the meta-volume for geo-replication: For more information on configuring meta-volume, see Section 10.3.5, "Configuring a Meta-Volume" . Retrieve the brick paths in server0.example.com using the following command: Brick path in server0.example.com is /rhgs/brick1 . This has to be replaced with the brick in the newly added host, server5.example.com . Create the required brick path in server5.example.com.For example, if /rhs/brick is the XFS mount point in server5.example.com, then create a brick directory in that path. Execute the replace-brick command with the force option: Verify that the new brick is online. Initiate self-heal on the volume. The status of the heal process can be seen by executing the command: The status of the heal process can be seen by executing the command: Detach the original machine from the trusted pool. Ensure that after the self-heal completes, the extended attributes are set to zero on the other bricks in the replica. In this example, the extended attributes trusted.afr.vol-client-0 and trusted.afr.vol-client-1 have zero values. This means that the data on the two bricks is identical. If these attributes are not zero after self-heal is completed, the data has not been synchronised correctly. Start the geo-replication session using force option: 11.10.2. Replacing a Host Machine with the Same Hostname You can replace a failed host with another node having the same FQDN (Fully Qualified Domain Name). 
A host in a Red Hat Gluster Storage Trusted Storage Pool has its own identity, called the UUID, generated by the glusterFS Management Daemon. The UUID for the host is available in the /var/lib/glusterd/glusterd.info file. In the following example, the host with the FQDN server0.example.com was irrecoverable and must be replaced with a host having the same FQDN. The following steps have to be performed on the new host. Stop the geo-replication session, if configured, by executing the following command. Stop the glusterd service on server0.example.com. On RHEL 7 and RHEL 8, run On RHEL 6, run Important Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See the Version Details table in the section Red Hat Gluster Storage Software Components and Versions of the Installation Guide Retrieve the UUID of the failed host (server0.example.com) from another peer of the Red Hat Gluster Storage Trusted Storage Pool by executing the following command: Note that the UUID of the failed host is b5ab2ec3-5411-45fa-a30f-43bd04caf96b Edit the glusterd.info file in the new host and include the UUID of the host you retrieved in the previous step. Note The operating version of this node must be the same as in other nodes of the trusted storage pool. Select any host (say for example, server1.example.com) in the Red Hat Gluster Storage Trusted Storage Pool and retrieve its UUID from the glusterd.info file. Gather the peer information files from the host (server1.example.com) selected in the previous step. Execute the following command in that host (server1.example.com) of the cluster. Remove the peer file corresponding to the failed host (server0.example.com) from the /tmp/peers directory. Note that the UUID corresponds to the UUID of the failed host (server0.example.com) retrieved in Step 3. Archive all the files and copy them to the failed host (server0.example.com). Copy the archive created above to the new peer. Copy the extracted content to the /var/lib/glusterd/peers directory. Execute the following command in the newly added host with the same name (server0.example.com) and IP address. Select any other host in the cluster other than the node (server1.example.com) selected in step 5. Copy the peer file corresponding to the UUID of the host retrieved in Step 5 to the new host (server0.example.com) by executing the following command: Start the glusterd service. If the new brick has the same hostname and same path, refer to Section 11.9.5, "Reconfiguring a Brick in a Volume" , and if it has a different hostname and a different brick path for replicated volumes, refer to Section 11.9.2, "Replacing an Old Brick with a New Brick on a Replicate or Distribute-replicate Volume" . In the case of disperse volumes, when a new brick has a different hostname and a different brick path, refer to Section 11.9.4, "Replacing an Old Brick with a New Brick on a Dispersed or Distributed-dispersed Volume" . Perform the self-heal operation on the restored volume. You can view the gluster volume self-heal status by executing the following command: If the geo-replication session is configured, perform the following steps: Setup the geo-replication session by generating the ssh keys: Create geo-replication session again with force option to distribute the keys from new nodes to Slave nodes. After successfully setting up the shared storage volume, when a new node is replaced in the cluster, the shared storage is not mounted automatically on this node. Neither is the /etc/fstab entry added for the shared storage on this node.
To make use of shared storage on this node, execute the following commands: Note With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ . For more information on setting up shared storage volume, see Section 11.12, "Setting up Shared Storage Volume" . Configure the meta-volume for geo-replication: Start the geo-replication session using force option: Replacing a host with the same Hostname in a two-node Red Hat Gluster Storage Trusted Storage Pool If there are only 2 hosts in the Red Hat Gluster Storage Trusted Storage Pool where the host server0.example.com must be replaced, perform the following steps: Stop the geo-replication session if configured by executing the following command: Stop the glusterd service on server0.example.com. On RHEL 7 and RHEL 8, run On RHEL 6, run Important Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See Version Details table in section Red Hat Gluster Storage Software Components and Versions of the Installation Guide Retrieve the UUID of the failed host (server0.example.com) from another peer in the Red Hat Gluster Storage Trusted Storage Pool by executing the following command: Note that the UUID of the failed host is b5ab2ec3-5411-45fa-a30f-43bd04caf96b Edit the glusterd.info file in the new host (server0.example.com) and include the UUID of the host you retrieved in the step. Note The operating version of this node must be same as in other nodes of the trusted storage pool. Create the peer file in the newly created host (server0.example.com) in /var/lib/glusterd/peers/<uuid-of-other-peer> with the name of the UUID of the other host (server1.example.com). UUID of the host can be obtained with the following: Example 11.6. Example to obtain the UUID of a host In this case the UUID of other peer is 1d9677dc-6159-405e-9319-ad85ec030880 Create a file /var/lib/glusterd/peers/1d9677dc-6159-405e-9319-ad85ec030880 in server0.example.com, with the following command: The file you create must contain the following information: Continue to perform steps 12 to 18 as documented in the procedure.
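The peer file referenced above typically carries three lines, as shown in the commands listing for this section; the UUID and hostname are placeholders for the values of the other peer:

UUID=<uuid-of-other-node>
state=3
hostname=<hostname>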
[ "gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL stop force", "gluster peer probe server5.example.com", "gluster system:: execute gsec_create", "gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create push-pem force", "mount -t glusterfs local node's ip :gluster_shared_storage /var/run/gluster/shared_storage cp /etc/fstab /var/run/gluster/fstab.tmp echo local node's ip :/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0\" >> /etc/fstab", "gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true", "gluster volume info <VOLNAME>", "Volume Name: vol Type: Replicate Volume ID: 0xde822e25ebd049ea83bfaa3c4be2b440 Status: Started Snap Volume: no Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1: server0.example.com:/rhgs/brick1 Brick2: server1.example.com:/rhgs/brick1 Options Reconfigured: cluster.granular-entry-heal: on performance.readdir-ahead: on snap-max-hard-limit: 256 snap-max-soft-limit: 90 auto-delete: disable", "mkdir /rhgs/brick1", "gluster volume replace-brick vol server0.example.com:/rhgs/brick1 server5.example.com:/rhgs/brick1 commit force volume replace-brick: success: replace-brick commit successful", "gluster volume status Status of volume: vol Gluster process Port Online Pid Brick server5.example.com:/rhgs/brick1 49156 Y 5731 Brick server1.example.com:/rhgs/brick1 49153 Y 5354", "gluster volume heal VOLNAME", "gluster volume heal VOLNAME info", "gluster peer detach (server) All clients mounted through the peer which is getting detached need to be remounted, using one of the other active peers in the trusted storage pool, this ensures that the client gets notification on any changes done on the gluster configuration and if the same has been done do you want to proceed? (y/n) y peer detach: success", "getfattr -d -m. 
-e hex /rhgs/brick1 getfattr: Removing leading '/' from absolute path names #file: rhgs/brick1 security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000 trusted.afr.vol-client-0=0x000000000000000000000000 trusted.afr.vol-client-1=0x000000000000000000000000 trusted.gfid=0x00000000000000000000000000000001 trusted.glusterfs.dht=0x0000000100000000000000007ffffffe trusted.glusterfs.volume-id=0xde822e25ebd049ea83bfaa3c4be2b440", "gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start force", "gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL stop force", "systemctl stop glusterd", "service glusterd stop", "gluster peer status Number of Peers: 2 Hostname: server1.example.com Uuid: 1d9677dc-6159-405e-9319-ad85ec030880 State: Peer in Cluster (Connected) Hostname: server0.example.com Uuid: b5ab2ec3-5411-45fa-a30f-43bd04caf96b State: Peer Rejected (Connected)", "cat /var/lib/glusterd/glusterd.info UUID=b5ab2ec3-5411-45fa-a30f-43bd04caf96b operating-version=30703", "grep -i uuid /var/lib/glusterd/glusterd.info UUID=8cc6377d-0153-4540-b965-a4015494461c", "cp -a /var/lib/glusterd/peers /tmp/", "rm /tmp/peers/b5ab2ec3-5411-45fa-a30f-43bd04caf96b", "cd /tmp; tar -cvf peers.tar peers", "scp /tmp/peers.tar [email protected]:/tmp", "tar -xvf /tmp/peers.tar # cp peers/* /var/lib/glusterd/peers/", "scp /var/lib/glusterd/peers/<UUID-retrieved-from-step5> root@Example1:/var/lib/glusterd/peers/", "systemctl start glusterd", "gluster volume heal VOLNAME", "gluster volume heal VOLNAME info", "gluster system:: execute gsec_create", "gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create push-pem force", "mount -t glusterfs <local node's ip>:gluster_shared_storage /var/run/gluster/shared_storage # cp /etc/fstab /var/run/gluster/fstab.tmp # echo \"<local node's ip>:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0\" >> /etc/fstab", "gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true", "gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start force", "gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL stop force", "systemctl stop glusterd", "service glusterd stop", "gluster peer status Number of Peers: 1 Hostname: server0.example.com Uuid: b5ab2ec3-5411-45fa-a30f-43bd04caf96b State: Peer Rejected (Connected)", "cat /var/lib/glusterd/glusterd.info UUID=b5ab2ec3-5411-45fa-a30f-43bd04caf96b operating-version=30703", "gluster system:: uuid get", "For example, gluster system:: uuid get UUID: 1d9677dc-6159-405e-9319-ad85ec030880", "touch /var/lib/glusterd/peers/1d9677dc-6159-405e-9319-ad85ec030880", "UUID=<uuid-of-other-node> state=3 hostname=<hostname>" ]
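After bringing the replacement host back into the pool, a short verification pass such as the following sketch (using the example brick path /rhgs/brick1 from this chapter) helps confirm that self-heal has completed:

# Trigger and then monitor self-heal on the volume
gluster volume heal VOLNAME
gluster volume heal VOLNAME info

# Check that the AFR changelog extended attributes are zero on the surviving brick
getfattr -d -m. -e hex /rhgs/brick1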
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-replacing_hosts
Chapter 5. Using AMQ Streams Operators
Chapter 5. Using AMQ Streams Operators Use the AMQ Streams operators to manage your Kafka cluster, and Kafka topics and users. 5.1. Using the Cluster Operator The Cluster Operator is used to deploy a Kafka cluster and other Kafka components. The Cluster Operator is deployed using YAML installation files. For information on deploying the Cluster Operator, see Deploying the Cluster Operator in the Deploying and Upgrading AMQ Streams on OpenShift guide. For information on the deployment options available for Kafka, see Kafka Cluster configuration . Note On OpenShift, a Kafka Connect deployment can incorporate a Source2Image feature to provide a convenient way to add additional connectors. 5.1.1. Cluster Operator configuration The Cluster Operator can be configured through the following supported environment variables and through the logging configuration. STRIMZI_NAMESPACE A comma-separated list of namespaces that the operator should operate in. When not set, set to empty string, or to * the Cluster Operator will operate in all namespaces. The Cluster Operator deployment might use the OpenShift Downward API to set this automatically to the namespace the Cluster Operator is deployed in. See the example below: env: - name: STRIMZI_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace STRIMZI_FULL_RECONCILIATION_INTERVAL_MS Optional, default is 120000 ms. The interval between periodic reconciliations, in milliseconds. STRIMZI_OPERATION_TIMEOUT_MS Optional, default 300000 ms. The timeout for internal operations, in milliseconds. This value should be increased when using AMQ Streams on clusters where regular OpenShift operations take longer than usual (because of slow downloading of Docker images, for example). STRIMZI_KAFKA_IMAGES Required. This provides a mapping from Kafka version to the corresponding Docker image containing a Kafka broker of that version. The required syntax is whitespace or comma separated <version> = <image> pairs. For example 2.5.0=registry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.6.7, 2.6.0=registry.redhat.io/amq7/amq-streams-kafka-26-rhel7:1.6.7 . This is used when a Kafka.spec.kafka.version property is specified but not the Kafka.spec.kafka.image , as described in Section 2.1.18, "Container images" . STRIMZI_DEFAULT_KAFKA_INIT_IMAGE Optional, default registry.redhat.io/amq7/amq-streams-rhel7-operator:1.6.7 . The image name to use as default for the init container started before the broker for initial configuration work (that is, rack support), if no image is specified as the kafka-init-image in the Section 2.1.18, "Container images" . STRIMZI_KAFKA_CONNECT_IMAGES Required. This provides a mapping from the Kafka version to the corresponding Docker image containing a Kafka connect of that version. The required syntax is whitespace or comma separated <version> = <image> pairs. For example 2.5.0=registry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.6.7, 2.6.0=registry.redhat.io/amq7/amq-streams-kafka-26-rhel7:1.6.7 . This is used when a KafkaConnect.spec.version property is specified but not the KafkaConnect.spec.image , as described in Section B.1.6, " image " . STRIMZI_KAFKA_CONNECT_S2I_IMAGES Required. This provides a mapping from the Kafka version to the corresponding Docker image containing a Kafka connect of that version. The required syntax is whitespace or comma separated <version> = <image> pairs. For example 2.5.0=registry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.6.7, 2.6.0=registry.redhat.io/amq7/amq-streams-kafka-26-rhel7:1.6.7 . 
This is used when a KafkaConnectS2I.spec.version property is specified but not the KafkaConnectS2I.spec.image , as described in Section B.1.6, " image " . STRIMZI_KAFKA_MIRROR_MAKER_IMAGES Required. This provides a mapping from the Kafka version to the corresponding Docker image containing a Kafka mirror maker of that version. The required syntax is whitespace or comma separated <version> = <image> pairs. For example 2.5.0=registry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.6.7, 2.6.0=registry.redhat.io/amq7/amq-streams-kafka-26-rhel7:1.6.7 . This is used when a KafkaMirrorMaker.spec.version property is specified but not the KafkaMirrorMaker.spec.image , as described in Section B.1.6, " image " . STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE Optional, default registry.redhat.io/amq7/amq-streams-rhel7-operator:1.6.7 . The image name to use as the default when deploying the topic operator, if no image is specified as the Kafka.spec.entityOperator.topicOperator.image in the Section 2.1.18, "Container images" of the Kafka resource. STRIMZI_DEFAULT_USER_OPERATOR_IMAGE Optional, default registry.redhat.io/amq7/amq-streams-rhel7-operator:1.6.7 . The image name to use as the default when deploying the user operator, if no image is specified as the Kafka.spec.entityOperator.userOperator.image in the Section 2.1.18, "Container images" of the Kafka resource. STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE Optional, default registry.redhat.io/amq7/amq-streams-kafka-26-rhel7:1.6.7 . The image name to use as the default when deploying the sidecar container which provides TLS support for the Entity Operator, if no image is specified as the Kafka.spec.entityOperator.tlsSidecar.image in the Section 2.1.18, "Container images" . STRIMZI_IMAGE_PULL_POLICY Optional. The ImagePullPolicy which will be applied to containers in all pods managed by AMQ Streams Cluster Operator. The valid values are Always , IfNotPresent , and Never . If not specified, the OpenShift defaults will be used. Changing the policy will result in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters. STRIMZI_IMAGE_PULL_SECRETS Optional. A comma-separated list of Secret names. The secrets referenced here contain the credentials to the container registries where the container images are pulled from. The secrets are used in the imagePullSecrets field for all Pods created by the Cluster Operator. Changing this list results in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters. STRIMZI_KUBERNETES_VERSION Optional. Overrides the OpenShift version information detected from the API server. See the example below: env: - name: STRIMZI_KUBERNETES_VERSION value: | major=1 minor=16 gitVersion=v1.16.2 gitCommit=c97fe5036ef3df2967d086711e6c0c405941e14b gitTreeState=clean buildDate=2019-10-15T19:09:08Z goVersion=go1.12.10 compiler=gc platform=linux/amd64 KUBERNETES_SERVICE_DNS_DOMAIN Optional. Overrides the default OpenShift DNS domain name suffix. By default, services assigned in the OpenShift cluster have a DNS domain name that uses the default suffix cluster.local . For example, for broker kafka-0 : <cluster-name> -kafka-0. <cluster-name> -kafka-brokers. <namespace> .svc. cluster.local The DNS domain name is added to the Kafka broker certificates used for hostname verification. 
If you are using a different DNS domain name suffix in your cluster, change the KUBERNETES_SERVICE_DNS_DOMAIN environment variable from the default to the one you are using in order to establish a connection with the Kafka brokers. Configuration by ConfigMap The Cluster Operator's logging is configured by the strimzi-cluster-operator ConfigMap . A ConfigMap containing logging configuration is created when installing the Cluster Operator. This ConfigMap is described in the file install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml . You configure Cluster Operator logging by changing the data field log4j2.properties in this ConfigMap . To update the logging configuration, you can edit the 050-ConfigMap-strimzi-cluster-operator.yaml file and then run the following command: oc apply -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml Alternatively, edit the ConfigMap directly: oc edit cm strimzi-cluster-operator To change the frequency of the reload interval, set a time in seconds in the monitorInterval option in the created ConfigMap . If the ConfigMap is missing when the Cluster Operator is deployed, the default logging values are used. If the ConfigMap is accidentally deleted after the Cluster Operator is deployed, the most recently loaded logging configuration is used. Create a new ConfigMap to load a new logging configuration. Note Do not remove the monitorInterval option from the ConfigMap. 5.1.1.1. Periodic reconciliation Although the Cluster Operator reacts to all notifications about the desired cluster resources received from the OpenShift cluster, if the operator is not running, or if a notification is not received for any reason, the desired resources will get out of sync with the state of the running OpenShift cluster. In order to handle failovers properly, a periodic reconciliation process is executed by the Cluster Operator so that it can compare the state of the desired resources with the current cluster deployments in order to have a consistent state across all of them. You can set the time interval for the periodic reconciliations using the [STRIMZI_FULL_RECONCILIATION_INTERVAL_MS] variable. 5.1.2. Provisioning Role-Based Access Control (RBAC) For the Cluster Operator to function it needs permission within the OpenShift cluster to interact with resources such as Kafka , KafkaConnect , and so on, as well as the managed resources, such as ConfigMaps , Pods , Deployments , StatefulSets and Services . Such permission is described in terms of OpenShift role-based access control (RBAC) resources: ServiceAccount , Role and ClusterRole , RoleBinding and ClusterRoleBinding . In addition to running under its own ServiceAccount with a ClusterRoleBinding , the Cluster Operator manages some RBAC resources for the components that need access to OpenShift resources. OpenShift also includes privilege escalation protections that prevent components operating under one ServiceAccount from granting other ServiceAccounts privileges that the granting ServiceAccount does not have. Because the Cluster Operator must be able to create the ClusterRoleBindings , and RoleBindings needed by resources it manages, the Cluster Operator must also have those same privileges. 5.1.2.1. 
Delegated privileges When the Cluster Operator deploys resources for a desired Kafka resource it also creates ServiceAccounts , RoleBindings , and ClusterRoleBindings , as follows: The Kafka broker pods use a ServiceAccount called cluster-name -kafka When the rack feature is used, the strimzi- cluster-name -kafka-init ClusterRoleBinding is used to grant this ServiceAccount access to the nodes within the cluster via a ClusterRole called strimzi-kafka-broker When the rack feature is not used no binding is created The ZooKeeper pods use a ServiceAccount called cluster-name -zookeeper The Entity Operator pod uses a ServiceAccount called cluster-name -entity-operator The Topic Operator produces OpenShift events with status information, so the ServiceAccount is bound to a ClusterRole called strimzi-entity-operator which grants this access via the strimzi-entity-operator RoleBinding The pods for KafkaConnect and KafkaConnectS2I resources use a ServiceAccount called cluster-name -cluster-connect The pods for KafkaMirrorMaker use a ServiceAccount called cluster-name -mirror-maker The pods for KafkaMirrorMaker2 use a ServiceAccount called cluster-name -mirrormaker2 The pods for KafkaBridge use a ServiceAccount called cluster-name -bridge 5.1.2.2. ServiceAccount The Cluster Operator is best run using a ServiceAccount : Example ServiceAccount for the Cluster Operator apiVersion: v1 kind: ServiceAccount metadata: name: strimzi-cluster-operator labels: app: strimzi The Deployment of the operator then needs to specify this in its spec.template.spec.serviceAccountName : Partial example of Deployment for the Cluster Operator apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-cluster-operator labels: app: strimzi spec: replicas: 1 selector: matchLabels: name: strimzi-cluster-operator strimzi.io/kind: cluster-operator template: # ... Note line 12, where the strimzi-cluster-operator ServiceAccount is specified as the serviceAccountName . 5.1.2.3. ClusterRoles The Cluster Operator needs to operate using ClusterRoles that gives access to the necessary resources. Depending on the OpenShift cluster setup, a cluster administrator might be needed to create the ClusterRoles . Note Cluster administrator rights are only needed for the creation of the ClusterRoles . The Cluster Operator will not run under the cluster admin account. The ClusterRoles follow the principle of least privilege and contain only those privileges needed by the Cluster Operator to operate Kafka, Kafka Connect, and ZooKeeper clusters. The first set of assigned privileges allow the Cluster Operator to manage OpenShift resources such as StatefulSets , Deployments , Pods , and ConfigMaps . 
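If you want to confirm which RBAC resources were actually installed, a quick check is to list the Strimzi ClusterRoles and their bindings. This is only a sketch: it assumes the default resource names and the app: strimzi labels from the installation files, and the myproject namespace used in the examples that follow:
oc get clusterroles -l app=strimzi
oc get clusterrolebindings,rolebindings -n myproject -l app=strimzi
oc describe clusterrole strimzi-cluster-operator-namespaced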
Cluster Operator uses ClusterRoles to grant permission at the namespace-scoped resources level and cluster-scoped resources level: ClusterRole with namespaced resources for the Cluster Operator apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-cluster-operator-namespaced labels: app: strimzi rules: - apiGroups: - "" resources: # The cluster operator needs to access and manage service accounts to grant Strimzi components cluster permissions - serviceaccounts verbs: - get - create - delete - patch - update - apiGroups: - "rbac.authorization.k8s.io" resources: # The cluster operator needs to access and manage rolebindings to grant Strimzi components cluster permissions - rolebindings verbs: - get - create - delete - patch - update - apiGroups: - "" resources: # The cluster operator needs to access and manage config maps for Strimzi components configuration - configmaps # The cluster operator needs to access and manage services to expose Strimzi components to network traffic - services # The cluster operator needs to access and manage secrets to handle credentials - secrets # The cluster operator needs to access and manage persistent volume claims to bind them to Strimzi components for persistent data - persistentvolumeclaims verbs: - get - list - watch - create - delete - patch - update - apiGroups: - "kafka.strimzi.io" resources: # The cluster operator runs the KafkaAssemblyOperator, which needs to access and manage Kafka resources - kafkas - kafkas/status # The cluster operator runs the KafkaConnectAssemblyOperator, which needs to access and manage KafkaConnect resources - kafkaconnects - kafkaconnects/status # The cluster operator runs the KafkaConnectS2IAssemblyOperator, which needs to access and manage KafkaConnectS2I resources - kafkaconnects2is - kafkaconnects2is/status # The cluster operator runs the KafkaConnectorAssemblyOperator, which needs to access and manage KafkaConnector resources - kafkaconnectors - kafkaconnectors/status # The cluster operator runs the KafkaMirrorMakerAssemblyOperator, which needs to access and manage KafkaMirrorMaker resources - kafkamirrormakers - kafkamirrormakers/status # The cluster operator runs the KafkaBridgeAssemblyOperator, which needs to access and manage BridgeMaker resources - kafkabridges - kafkabridges/status # The cluster operator runs the KafkaMirrorMaker2AssemblyOperator, which needs to access and manage KafkaMirrorMaker2 resources - kafkamirrormaker2s - kafkamirrormaker2s/status # The cluster operator runs the KafkaRebalanceAssemblyOperator, which needs to access and manage KafkaRebalance resources - kafkarebalances - kafkarebalances/status verbs: - get - list - watch - create - delete - patch - update - apiGroups: - "" resources: # The cluster operator needs to access and delete pods, this is to allow it to monitor pod health and coordinate rolling updates - pods verbs: - get - list - watch - delete - apiGroups: - "" resources: - endpoints verbs: - get - list - watch - apiGroups: # The cluster operator needs the extensions api as the operator supports Kubernetes version 1.11+ # apps/v1 was introduced in Kubernetes 1.14 - "extensions" resources: # The cluster operator needs to access and manage deployments to run deployment based Strimzi components - deployments - deployments/scale # The cluster operator needs to access replica sets to manage Strimzi components and to determine error states - replicasets # The cluster operator needs to access and manage replication controllers to manage replicasets - 
replicationcontrollers # The cluster operator needs to access and manage network policies to lock down communication between Strimzi components - networkpolicies # The cluster operator needs to access and manage ingresses which allow external access to the services in a cluster - ingresses verbs: - get - list - watch - create - delete - patch - update - apiGroups: - "apps" resources: # The cluster operator needs to access and manage deployments to run deployment based Strimzi components - deployments - deployments/scale - deployments/status # The cluster operator needs to access and manage stateful sets to run stateful sets based Strimzi components - statefulsets # The cluster operator needs to access replica-sets to manage Strimzi components and to determine error states - replicasets verbs: - get - list - watch - create - delete - patch - update - apiGroups: - "" resources: # The cluster operator needs to be able to create events and delegate permissions to do so - events verbs: - create - apiGroups: # OpenShift S2I requirements - apps.openshift.io resources: - deploymentconfigs - deploymentconfigs/scale - deploymentconfigs/status - deploymentconfigs/finalizers verbs: - get - list - watch - create - delete - patch - update - apiGroups: # OpenShift S2I requirements - build.openshift.io resources: - buildconfigs - builds verbs: - create - delete - get - list - patch - watch - update - apiGroups: # OpenShift S2I requirements - image.openshift.io resources: - imagestreams - imagestreams/status verbs: - create - delete - get - list - watch - patch - update - apiGroups: - networking.k8s.io resources: # The cluster operator needs to access and manage network policies to lock down communication between Strimzi components - networkpolicies verbs: - get - list - watch - create - delete - patch - update - apiGroups: - route.openshift.io resources: # The cluster operator needs to access and manage routes to expose Strimzi components for external access - routes - routes/custom-host verbs: - get - list - create - delete - patch - update - apiGroups: - policy resources: # The cluster operator needs to access and manage pod disruption budgets this limits the number of concurrent disruptions # that a Strimzi component experiences, allowing for higher availability - poddisruptionbudgets verbs: - get - list - watch - create - delete - patch - update The second includes the permissions needed for cluster-scoped resources. 
ClusterRole with cluster-scoped resources for the Cluster Operator apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-cluster-operator-global labels: app: strimzi rules: - apiGroups: - "rbac.authorization.k8s.io" resources: # The cluster operator needs to create and manage cluster role bindings in the case of an install where a user # has specified they want their cluster role bindings generated - clusterrolebindings verbs: - get - create - delete - patch - update - watch - apiGroups: - storage.k8s.io resources: # The cluster operator requires "get" permissions to view storage class details # This is because only a persistent volume of a supported storage class type can be resized - storageclasses verbs: - get - apiGroups: - "" resources: # The cluster operator requires "list" permissions to view all nodes in a cluster # The listing is used to determine the node addresses when NodePort access is configured # These addresses are then exposed in the custom resource states - nodes verbs: - list The strimzi-kafka-broker ClusterRole represents the access needed by the init container in Kafka pods that is used for the rack feature. As described in the Delegated privileges section, this role is also needed by the Cluster Operator in order to be able to delegate this access. ClusterRole for the Cluster Operator allowing it to delegate access to OpenShift nodes to the Kafka broker pods apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-kafka-broker labels: app: strimzi rules: - apiGroups: - "" resources: # The Kafka Brokers require "get" permissions to view the node they are on # This information is used to generate a Rack ID that is used for High Availability configurations - nodes verbs: - get The strimzi-topic-operator ClusterRole represents the access needed by the Topic Operator. As described in the Delegated privileges section, this role is also needed by the Cluster Operator in order to be able to delegate this access. ClusterRole for the Cluster Operator allowing it to delegate access to events to the Topic Operator apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-entity-operator labels: app: strimzi rules: - apiGroups: - "kafka.strimzi.io" resources: # The entity operator runs the KafkaTopic assembly operator, which needs to access and manage KafkaTopic resources - kafkatopics - kafkatopics/status # The entity operator runs the KafkaUser assembly operator, which needs to access and manage KafkaUser resources - kafkausers - kafkausers/status verbs: - get - list - watch - create - patch - update - delete - apiGroups: - "" resources: - events verbs: # The entity operator needs to be able to create events - create - apiGroups: - "" resources: # The entity operator user-operator needs to access and manage secrets to store generated credentials - secrets verbs: - get - list - create - patch - update - delete The strimzi-kafka-client ClusterRole represents the access needed by the components based on Kafka clients which use the client rack-awareness. As described in the Delegated privileges section, this role is also needed by the Cluster Operator in order to be able to delegate this access. 
ClusterRole for the Cluster Operator allowing it to delegate access to OpenShift nodes to the Kafka client based pods apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-kafka-client labels: app: strimzi rules: - apiGroups: - "" resources: # The Kafka clients (Connect, Mirror Maker, etc.) require "get" permissions to view the node they are on # This information is used to generate a Rack ID (client.rack option) that is used for consuming from the closest # replicas when enabled - nodes verbs: - get 5.1.2.4. ClusterRoleBindings The operator needs ClusterRoleBindings and RoleBindings which associates its ClusterRole with its ServiceAccount : ClusterRoleBindings are needed for ClusterRoles containing cluster-scoped resources. Example ClusterRoleBinding for the Cluster Operator apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: strimzi-cluster-operator labels: app: strimzi subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-cluster-operator-global apiGroup: rbac.authorization.k8s.io ClusterRoleBindings are also needed for the ClusterRoles needed for delegation: Example ClusterRoleBinding for the Cluster Operator for the Kafka broker rack-awarness apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: strimzi-cluster-operator-kafka-broker-delegation labels: app: strimzi # The Kafka broker cluster role must be bound to the cluster operator service account so that it can delegate the cluster role to the Kafka brokers. # This must be done to avoid escalating privileges which would be blocked by Kubernetes. subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-kafka-broker apiGroup: rbac.authorization.k8s.io and Example ClusterRoleBinding for the Cluster Operator for the Kafka client rack-awarness apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: strimzi-cluster-operator-kafka-client-delegation labels: app: strimzi # The Kafka clients cluster role must be bound to the cluster operator service account so that it can delegate the # cluster role to the Kafka clients using it for consuming from closest replica. # This must be done to avoid escalating privileges which would be blocked by Kubernetes. subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-kafka-client apiGroup: rbac.authorization.k8s.io ClusterRoles containing only namespaced resources are bound using RoleBindings only. apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: strimzi-cluster-operator labels: app: strimzi subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-cluster-operator-namespaced apiGroup: rbac.authorization.k8s.io apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: strimzi-cluster-operator-entity-operator-delegation labels: app: strimzi # The Entity Operator cluster role must be bound to the cluster operator service account so that it can delegate the cluster role to the Entity Operator. # This must be done to avoid escalating privileges which would be blocked by Kubernetes. subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-entity-operator apiGroup: rbac.authorization.k8s.io 5.2. 
Using the Topic Operator When you create, modify or delete a topic using the KafkaTopic resource, the Topic Operator ensures those changes are reflected in the Kafka cluster. The Deploying and Upgrading AMQ Streams on OpenShift guide provides instructions to deploy the Topic Operator: Using the Cluster Operator (recommended) Standalone to operate with Kafka clusters not managed by AMQ Streams 5.2.1. Kafka topic resource The KafkaTopic resource is used to configure topics, including the number of partitions and replicas. The full schema for KafkaTopic is described in KafkaTopic schema reference . 5.2.1.1. Identifying a Kafka cluster for topic handling A KafkaTopic resource includes a label that defines the appropriate name of the Kafka cluster (derived from the name of the Kafka resource) to which it belongs. For example: apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaTopic metadata: name: topic-name-1 labels: strimzi.io/cluster: my-cluster The label is used by the Topic Operator to identify the KafkaTopic resource and create a new topic, and also in subsequent handling of the topic. If the label does not match the Kafka cluster, the Topic Operator cannot identify the KafkaTopic and the topic is not created. 5.2.1.2. Handling changes to topics A fundamental problem that the Topic Operator has to solve is that there is no single source of truth: Both the KafkaTopic resource and the Kafka topic can be modified independently of the operator. Complicating this, the Topic Operator might not always be able to observe changes at each end in real time (for example, the operator might be down). To resolve this, the operator maintains its own private copy of the information about each topic. When a change happens either in the Kafka cluster, or in OpenShift, it looks at both the state of the other system and at its private copy in order to determine what needs to change to keep everything in sync. The same thing happens whenever the operator starts, and periodically while it is running. For example, suppose the Topic Operator is not running, and a KafkaTopic my-topic gets created. When the operator starts it will lack a private copy of "my-topic", so it can infer that the KafkaTopic has been created since it was last running. The operator will create the topic corresponding to my-topic , and also store a private copy of the metadata for my-topic . The private copy allows the operator to cope with scenarios where the topic configuration gets changed both in Kafka and in OpenShift, so long as the changes are not incompatible (for example, both changing the same topic config key, but to different values). In the case of incompatible changes, the Kafka configuration wins, and the KafkaTopic will be updated to reflect that. The private copy is held in the same ZooKeeper ensemble used by Kafka itself. This mitigates availability concerns, because if ZooKeeper is not running then Kafka itself cannot run, so the operator will be no less available than it would even if it was stateless. 5.2.1.3. Kafka topic usage recommendations When working with topics, be consistent. Always operate on either KafkaTopic resources or topics directly in OpenShift. Avoid routinely switching between both methods for a given topic. Use topic names that reflect the nature of the topic, and remember that names cannot be changed later. 
If creating a topic in Kafka, use a name that is a valid OpenShift resource name, otherwise the Topic Operator will need to create the corresponding KafkaTopic with a name that conforms to the OpenShift rules. Note Recommendations for identifiers and names in OpenShift are outlined in Identifiers and Names in OpenShift community article. 5.2.1.4. Kafka topic naming conventions Kafka and OpenShift impose their own validation rules for the naming of topics in Kafka and KafkaTopic.metadata.name respectively. There are valid names for each which are invalid in the other. Using the spec.topicName property, it is possible to create a valid topic in Kafka with a name that would be invalid for the Kafka topic in OpenShift. The spec.topicName property inherits Kafka naming validation rules: The name must not be longer than 249 characters. Valid characters for Kafka topics are ASCII alphanumerics, . , _ , and - . The name cannot be . or .. , though . can be used in a name, such as exampleTopic. or .exampleTopic . spec.topicName must not be changed. For example: apiVersion: {KafkaApiVersion} kind: KafkaTopic metadata: name: topic-name-1 spec: topicName: topicName-1 1 # ... 1 Upper case is invalid in OpenShift. cannot be changed to: apiVersion: {KafkaApiVersion} kind: KafkaTopic metadata: name: topic-name-1 spec: topicName: name-2 # ... Note Some Kafka client applications, such as Kafka Streams, can create topics in Kafka programmatically. If those topics have names that are invalid OpenShift resource names, the Topic Operator gives them valid names based on the Kafka names. Invalid characters are replaced and a hash is appended to the name. 5.2.2. Configuring a Kafka topic Use the properties of the KafkaTopic resource to configure a Kafka topic. You can use oc apply to create or modify topics, and oc delete to delete existing topics. For example: oc apply -f <topic-config-file> oc delete KafkaTopic <topic-name> This procedure shows how to create a topic with 10 partitions and 2 replicas. Before you start It is important that you consider the following before making your changes: Kafka does not support making the following changes through the KafkaTopic resource: Changing topic names using spec.topicName Decreasing partition size using spec.partitions You cannot use spec.replicas to change the number of replicas that were initially specified. Increasing spec.partitions for topics with keys will change how records are partitioned, which can be particularly problematic when the topic uses semantic partitioning . Prerequisites A running Kafka cluster configured with a Kafka broker listener using TLS authentication and encryption . A running Topic Operator (typically deployed with the Entity Operator ). For deleting a topic, delete.topic.enable=true (default) in the spec.kafka.config of the Kafka resource. Procedure Prepare a file containing the KafkaTopic to be created. An example KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaTopic metadata: name: orders labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2 Tip When modifying a topic, you can get the current version of the resource using oc get kafkatopic orders -o yaml . Create the KafkaTopic resource in OpenShift. oc apply -f TOPIC-CONFIG-FILE 5.2.3. Configuring the Topic Operator with resource requests and limits You can allocate resources, such as CPU and memory, to the Topic Operator and set a limit on the amount of resources it can consume. Prerequisites The Cluster Operator is running. 
Procedure Update the Kafka cluster configuration in an editor, as required: oc edit kafka MY-CLUSTER In the spec.entityOperator.topicOperator.resources property in the Kafka resource, set the resource requests and limits for the Topic Operator. apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: # Kafka and ZooKeeper sections... entityOperator: topicOperator: resources: requests: cpu: "1" memory: 500Mi limits: cpu: "1" memory: 500Mi Apply the new configuration to create or update the resource. oc apply -f KAFKA-CONFIG-FILE 5.3. Using the User Operator When you create, modify or delete a user using the KafkaUser resource, the User Operator ensures those changes are reflected in the Kafka cluster. The Deploying and Upgrading AMQ Streams on OpenShift guide provides instructions to deploy the User Operator: Using the Cluster Operator (recommended) Standalone to operate with Kafka clusters not managed by AMQ Streams For more information about the schema, see KafkaUser schema reference . Authenticating and authorizing access to Kafka Use KafkaUser to enable the authentication and authorization mechanisms that a specific client uses to access Kafka. For more information on using KafkUser to manage users and secure access to Kafka brokers, see Securing access to Kafka brokers . 5.3.1. Configuring the User Operator with resource requests and limits You can allocate resources, such as CPU and memory, to the User Operator and set a limit on the amount of resources it can consume. Prerequisites The Cluster Operator is running. Procedure Update the Kafka cluster configuration in an editor, as required: oc edit kafka MY-CLUSTER In the spec.entityOperator.userOperator.resources property in the Kafka resource, set the resource requests and limits for the User Operator. apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: # Kafka and ZooKeeper sections... entityOperator: userOperator: resources: requests: cpu: "1" memory: 500Mi limits: cpu: "1" memory: 500Mi Save the file and exit the editor. The Cluster Operator applies the changes automatically. 5.4. Monitoring operators using Prometheus metrics AMQ Streams operators expose Prometheus metrics. The metrics are automatically enabled and contain information about: Number of reconciliations Number of Custom Resources the operator is processing Duration of reconciliations JVM metrics from the operators Additionally, we provide an example Grafana dashboard. For more information about Prometheus, see the Introducing Metrics to Kafka in the Deploying and Upgrading AMQ Streams on OpenShift guide.
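If you want to inspect the raw metrics without setting up Prometheus, one rough approach is to port-forward to the Cluster Operator and read its metrics endpoint directly. This sketch assumes the operator Deployment is named strimzi-cluster-operator and that its HTTP server listens on port 8080 and serves /metrics:
oc port-forward deployment/strimzi-cluster-operator 8080:8080
curl -s http://localhost:8080/metrics | grep strimzi_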
[ "env: - name: STRIMZI_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace", "env: - name: STRIMZI_KUBERNETES_VERSION value: | major=1 minor=16 gitVersion=v1.16.2 gitCommit=c97fe5036ef3df2967d086711e6c0c405941e14b gitTreeState=clean buildDate=2019-10-15T19:09:08Z goVersion=go1.12.10 compiler=gc platform=linux/amd64", "<cluster-name> -kafka-0. <cluster-name> -kafka-brokers. <namespace> .svc. cluster.local", "apply -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml", "edit cm strimzi-cluster-operator", "apiVersion: v1 kind: ServiceAccount metadata: name: strimzi-cluster-operator labels: app: strimzi", "apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-cluster-operator labels: app: strimzi spec: replicas: 1 selector: matchLabels: name: strimzi-cluster-operator strimzi.io/kind: cluster-operator template: #", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-cluster-operator-namespaced labels: app: strimzi rules: - apiGroups: - \"\" resources: # The cluster operator needs to access and manage service accounts to grant Strimzi components cluster permissions - serviceaccounts verbs: - get - create - delete - patch - update - apiGroups: - \"rbac.authorization.k8s.io\" resources: # The cluster operator needs to access and manage rolebindings to grant Strimzi components cluster permissions - rolebindings verbs: - get - create - delete - patch - update - apiGroups: - \"\" resources: # The cluster operator needs to access and manage config maps for Strimzi components configuration - configmaps # The cluster operator needs to access and manage services to expose Strimzi components to network traffic - services # The cluster operator needs to access and manage secrets to handle credentials - secrets # The cluster operator needs to access and manage persistent volume claims to bind them to Strimzi components for persistent data - persistentvolumeclaims verbs: - get - list - watch - create - delete - patch - update - apiGroups: - \"kafka.strimzi.io\" resources: # The cluster operator runs the KafkaAssemblyOperator, which needs to access and manage Kafka resources - kafkas - kafkas/status # The cluster operator runs the KafkaConnectAssemblyOperator, which needs to access and manage KafkaConnect resources - kafkaconnects - kafkaconnects/status # The cluster operator runs the KafkaConnectS2IAssemblyOperator, which needs to access and manage KafkaConnectS2I resources - kafkaconnects2is - kafkaconnects2is/status # The cluster operator runs the KafkaConnectorAssemblyOperator, which needs to access and manage KafkaConnector resources - kafkaconnectors - kafkaconnectors/status # The cluster operator runs the KafkaMirrorMakerAssemblyOperator, which needs to access and manage KafkaMirrorMaker resources - kafkamirrormakers - kafkamirrormakers/status # The cluster operator runs the KafkaBridgeAssemblyOperator, which needs to access and manage BridgeMaker resources - kafkabridges - kafkabridges/status # The cluster operator runs the KafkaMirrorMaker2AssemblyOperator, which needs to access and manage KafkaMirrorMaker2 resources - kafkamirrormaker2s - kafkamirrormaker2s/status # The cluster operator runs the KafkaRebalanceAssemblyOperator, which needs to access and manage KafkaRebalance resources - kafkarebalances - kafkarebalances/status verbs: - get - list - watch - create - delete - patch - update - apiGroups: - \"\" resources: # The cluster operator needs to access and delete pods, this is to allow it to monitor pod health and coordinate 
rolling updates - pods verbs: - get - list - watch - delete - apiGroups: - \"\" resources: - endpoints verbs: - get - list - watch - apiGroups: # The cluster operator needs the extensions api as the operator supports Kubernetes version 1.11+ # apps/v1 was introduced in Kubernetes 1.14 - \"extensions\" resources: # The cluster operator needs to access and manage deployments to run deployment based Strimzi components - deployments - deployments/scale # The cluster operator needs to access replica sets to manage Strimzi components and to determine error states - replicasets # The cluster operator needs to access and manage replication controllers to manage replicasets - replicationcontrollers # The cluster operator needs to access and manage network policies to lock down communication between Strimzi components - networkpolicies # The cluster operator needs to access and manage ingresses which allow external access to the services in a cluster - ingresses verbs: - get - list - watch - create - delete - patch - update - apiGroups: - \"apps\" resources: # The cluster operator needs to access and manage deployments to run deployment based Strimzi components - deployments - deployments/scale - deployments/status # The cluster operator needs to access and manage stateful sets to run stateful sets based Strimzi components - statefulsets # The cluster operator needs to access replica-sets to manage Strimzi components and to determine error states - replicasets verbs: - get - list - watch - create - delete - patch - update - apiGroups: - \"\" resources: # The cluster operator needs to be able to create events and delegate permissions to do so - events verbs: - create - apiGroups: # OpenShift S2I requirements - apps.openshift.io resources: - deploymentconfigs - deploymentconfigs/scale - deploymentconfigs/status - deploymentconfigs/finalizers verbs: - get - list - watch - create - delete - patch - update - apiGroups: # OpenShift S2I requirements - build.openshift.io resources: - buildconfigs - builds verbs: - create - delete - get - list - patch - watch - update - apiGroups: # OpenShift S2I requirements - image.openshift.io resources: - imagestreams - imagestreams/status verbs: - create - delete - get - list - watch - patch - update - apiGroups: - networking.k8s.io resources: # The cluster operator needs to access and manage network policies to lock down communication between Strimzi components - networkpolicies verbs: - get - list - watch - create - delete - patch - update - apiGroups: - route.openshift.io resources: # The cluster operator needs to access and manage routes to expose Strimzi components for external access - routes - routes/custom-host verbs: - get - list - create - delete - patch - update - apiGroups: - policy resources: # The cluster operator needs to access and manage pod disruption budgets this limits the number of concurrent disruptions # that a Strimzi component experiences, allowing for higher availability - poddisruptionbudgets verbs: - get - list - watch - create - delete - patch - update", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-cluster-operator-global labels: app: strimzi rules: - apiGroups: - \"rbac.authorization.k8s.io\" resources: # The cluster operator needs to create and manage cluster role bindings in the case of an install where a user # has specified they want their cluster role bindings generated - clusterrolebindings verbs: - get - create - delete - patch - update - watch - apiGroups: - storage.k8s.io resources: # The 
cluster operator requires \"get\" permissions to view storage class details # This is because only a persistent volume of a supported storage class type can be resized - storageclasses verbs: - get - apiGroups: - \"\" resources: # The cluster operator requires \"list\" permissions to view all nodes in a cluster # The listing is used to determine the node addresses when NodePort access is configured # These addresses are then exposed in the custom resource states - nodes verbs: - list", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-kafka-broker labels: app: strimzi rules: - apiGroups: - \"\" resources: # The Kafka Brokers require \"get\" permissions to view the node they are on # This information is used to generate a Rack ID that is used for High Availability configurations - nodes verbs: - get", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-entity-operator labels: app: strimzi rules: - apiGroups: - \"kafka.strimzi.io\" resources: # The entity operator runs the KafkaTopic assembly operator, which needs to access and manage KafkaTopic resources - kafkatopics - kafkatopics/status # The entity operator runs the KafkaUser assembly operator, which needs to access and manage KafkaUser resources - kafkausers - kafkausers/status verbs: - get - list - watch - create - patch - update - delete - apiGroups: - \"\" resources: - events verbs: # The entity operator needs to be able to create events - create - apiGroups: - \"\" resources: # The entity operator user-operator needs to access and manage secrets to store generated credentials - secrets verbs: - get - list - create - patch - update - delete", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-kafka-client labels: app: strimzi rules: - apiGroups: - \"\" resources: # The Kafka clients (Connect, Mirror Maker, etc.) require \"get\" permissions to view the node they are on # This information is used to generate a Rack ID (client.rack option) that is used for consuming from the closest # replicas when enabled - nodes verbs: - get", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: strimzi-cluster-operator labels: app: strimzi subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-cluster-operator-global apiGroup: rbac.authorization.k8s.io", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: strimzi-cluster-operator-kafka-broker-delegation labels: app: strimzi The Kafka broker cluster role must be bound to the cluster operator service account so that it can delegate the cluster role to the Kafka brokers. This must be done to avoid escalating privileges which would be blocked by Kubernetes. subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-kafka-broker apiGroup: rbac.authorization.k8s.io", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: strimzi-cluster-operator-kafka-client-delegation labels: app: strimzi The Kafka clients cluster role must be bound to the cluster operator service account so that it can delegate the cluster role to the Kafka clients using it for consuming from closest replica. This must be done to avoid escalating privileges which would be blocked by Kubernetes. 
subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-kafka-client apiGroup: rbac.authorization.k8s.io", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: strimzi-cluster-operator labels: app: strimzi subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-cluster-operator-namespaced apiGroup: rbac.authorization.k8s.io", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: strimzi-cluster-operator-entity-operator-delegation labels: app: strimzi The Entity Operator cluster role must be bound to the cluster operator service account so that it can delegate the cluster role to the Entity Operator. This must be done to avoid escalating privileges which would be blocked by Kubernetes. subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-entity-operator apiGroup: rbac.authorization.k8s.io", "apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaTopic metadata: name: topic-name-1 labels: strimzi.io/cluster: my-cluster", "apiVersion: {KafkaApiVersion} kind: KafkaTopic metadata: name: topic-name-1 spec: topicName: topicName-1 1 #", "apiVersion: {KafkaApiVersion} kind: KafkaTopic metadata: name: topic-name-1 spec: topicName: name-2 #", "apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaTopic metadata: name: orders labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2", "apply -f TOPIC-CONFIG-FILE", "edit kafka MY-CLUSTER", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: # Kafka and ZooKeeper sections entityOperator: topicOperator: resources: requests: cpu: \"1\" memory: 500Mi limits: cpu: \"1\" memory: 500Mi", "apply -f KAFKA-CONFIG-FILE", "edit kafka MY-CLUSTER", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: # Kafka and ZooKeeper sections entityOperator: userOperator: resources: requests: cpu: \"1\" memory: 500Mi limits: cpu: \"1\" memory: 500Mi" ]
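After the Cluster Operator reconciles a change to the Topic Operator or User Operator resources, you can confirm that the requests and limits were applied by inspecting the Entity Operator Deployment. A sketch, assuming the usual <cluster-name>-entity-operator naming, for example my-cluster-entity-operator:
oc get deployment my-cluster-entity-operator -o yaml | grep -A 6 resources: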
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_streams_on_openshift/assembly-operators-str
Chapter 10. Application credentials
Chapter 10. Application credentials Use Application Credentials to avoid the practice of embedding user account credentials in configuration files. Instead, the user creates an Application Credential that receives delegated access to a single project and has its own distinct secret. The user can also limit the delegated privileges to a single role in that project. This allows you to adopt the principle of least privilege, where the authenticated service gains access only to the one project and role that it needs to function, rather than all projects and roles. With Application Credentials, you can consume an API without revealing your user credentials, and applications can authenticate to Keystone without requiring embedded user credentials. You can use Application Credentials to generate tokens and configure keystone_authtoken settings for applications. These use cases are described in the following sections. Note The Application Credential is dependent on the user account that created it, so it will terminate if that account is ever deleted, or loses access to the relevant role. 10.1. Using Application Credentials to generate tokens Application Credentials are available to users as a self-service function in the dashboard. This example demonstrates how a user can create an Application Credential and then use it to generate a token. Create a test project, and test user accounts: Create a project called AppCreds : Create a user called AppCredsUser : Grant AppCredsUser access to the member role for the AppCreds project: Log in to the dashboard as AppCredsUser and create an Application Credential: Overview Identity Application Credentials +Create Application Credential . Note Ensure that you download the clouds.yaml file contents, because you cannot access it again after you close the pop-up window titled Your Application Credential . Create a file named /home/stack/.config/openstack/clouds.yaml using the CLI and paste the contents of the clouds.yaml file. Note These values will be different for your deployment. Use the Application Credential to generate a token. You must not be sourced as any specific user when using the following command, and you must be in the same directory as your clouds.yaml file. Note If you receive an error similar to __init__() got an unexpected keyword argument 'application_credential_secret' , then you might still be sourced to the credentials. For a fresh environment, run sudo su - stack . 10.2. Integrating Application Credentials with applications Application Credentials can be used to authenticate applications to keystone. When you use Application Credentials, the keystone_authtoken settings use v3applicationcredential as the authentication type and contain the credentials that you receive during the credential creation process. Enter the following values: application_credential_secret : The Application Credential secret. application_credential_id : The Application Credential id. (Optional) application_credential_name : You might use this parameter if you use a named application credential, rather than an ID. For example: 10.3. Managing Application Credentials You can use the command line to create and delete Application Credentials. The create subcommand creates an application credential based on the currently sourced account. 
For example, creating the credential when sourced as an admin user will grant the same roles to the Application Credential: Warning Using the --unrestricted parameter enables the application credential to create and delete other application credentials and trusts. This is potentially dangerous behavior and is disabled by default. You cannot use the --unrestricted parameter in combination with other access rules. By default, the resulting role membership includes all the roles assigned to the account that created the credentials. You can limit the role membership by delegating access only to a specific role: To delete an Application Credential: 10.4. Replacing Application Credentials Application credentials are bound to the user account that created them and become invalid if the user account is ever deleted, or if the user loses access to the delegated role. As a result, you should be prepared to generate a new application credential as needed. Replacing existing application credentials for configuration files Update the application credentials assigned to an application (using a configuration file): Create a new set of application credentials. Add the new credentials to the application configuration file, replacing the existing credentials. For more information, see Integrating Application Credentials with applications . Restart the application service to apply the change. Delete the old application credential, if appropriate. For more information about the command line options, see Managing Application Credentials . Replacing the existing application credentials in clouds.yaml When you replace an application credential used by clouds.yaml , you must create the replacement credentials using OpenStack user credentials. By default, you cannot use application credentials to create another set of application credentials. The openstack application credential create command creates an application credential based on the currently sourced account. Authenticate as the OpenStack user that originally created the authentication credentials that are about to expire. For example, if you used the procedure Using Application Credentials to generate tokens , you must log in again as AppCredsUser . Create an Application Credential called AppCred2 . This can be done using the OpenStack Dashboard, or the openstack CLI interface: Copy the id and secret parameters from the output of the command. The secret parameter value cannot be accessed again. Replace the application_credential_id and application_credential_secret parameter values in the USD{HOME}/.config/openstack/clouds.yaml file with the secret and id values that you copied. Verification Generate a token with clouds.yaml to confirm that the credentials are working as expected. You must not be sourced as any specific user when using the following command, and you must be in the same directory as your clouds.yaml file: Example output:
[ "openstack project create AppCreds", "openstack user create --project AppCreds --password-prompt AppCredsUser", "openstack role add --user AppCredsUser --project AppCreds member", "This is a clouds.yaml file, which can be used by OpenStack tools as a source of configuration on how to connect to a cloud. If this is your only cloud, just put this file in ~/.config/openstack/clouds.yaml and tools like python-openstackclient will just work with no further config. (You will need to add your password to the auth section) If you have more than one cloud account, add the cloud entry to the clouds section of your existing file and you can refer to them by name with OS_CLOUD=openstack or --os-cloud=openstack clouds: openstack: auth: auth_url: http://10.0.0.10:5000/v3 application_credential_id: \"6d141f23732b498e99db8186136c611b\" application_credential_secret: \"<example secret value>\" region_name: \"regionOne\" interface: \"public\" identity_api_version: 3 auth_type: \"v3applicationcredential\"", "openstack --os-cloud=openstack token issue +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | expires | 2018-08-29T05:37:29+0000 | | id | gAAAAABbhiMJ4TxxFlTMdsYJpfStsGotPrns0lnpvJq9ILtdi-NKqisWBeNiJlUXwmnoGQDh2CMyK9OeTsuEXnJNmFfKjxiHWmcQVYzAhMKo6_QMUtu_Qm6mtpzYYHBrUGboa_Ay0LBuFDtsjtgtvJ-r8G3TsJMowbKF-yo--O_XLhERU_QQVl3hl8zmMRdmLh_P9Cbhuolt | | project_id | 1a74eabbf05c41baadd716179bb9e1da | | user_id | ef679eeddfd14f8b86becfd7e1dc84f2 | +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+", "[keystone_authtoken] auth_url = http://10.0.0.10:5000/v3 auth_type = v3applicationcredential application_credential_id = \"6cb5fa6a13184e6fab65ba2108adf50c\" application_credential_secret = \"<example password>\"", "openstack application credential create --description \"App Creds - All roles\" AppCredsUser +--------------+----------------------------------------------------------------------------------------+ | Field | Value | +--------------+----------------------------------------------------------------------------------------+ | description | App Creds - All roles | | expires_at | None | | id | fc17651c2c114fd6813f86fdbb430053 | | name | AppCredsUser | | project_id | 507663d0cfe244f8bc0694e6ed54d886 | | roles | member reader admin | | secret | fVnqa6I_XeRDDkmQnB5lx361W1jHtOtw3ci_mf_tOID-09MrPAzkU7mv-by8ykEhEa1QLPFJLNV4cS2Roo9lOg | | unrestricted | False | +--------------+----------------------------------------------------------------------------------------+", "openstack application credential create --description \"App Creds - Member\" --role member AppCredsUser +--------------+----------------------------------------------------------------------------------------+ | Field | Value | +--------------+----------------------------------------------------------------------------------------+ | description | App Creds - Member | | expires_at | None | | id | e21e7f4b578240f79814085a169c9a44 | | name | AppCredsUser | | project_id | 
507663d0cfe244f8bc0694e6ed54d886 | | roles | member | | secret | XCLVUTYIreFhpMqLVB5XXovs_z9JdoZWpdwrkaG1qi5GQcmBMUFG7cN2htzMlFe5T5mdPsnf5JMNbu0Ih-4aCg | | unrestricted | False | +--------------+----------------------------------------------------------------------------------------+", "openstack application credential delete AppCredsUser", "openstack application credential create --description \"App Creds 2 - Member\" --role member AppCred2", "openstack --os-cloud=openstack token issue", "+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | expires | 2018-08-29T05:37:29+0000 | | id | gAAAAABbhiMJ4TxxFlTMdsYJpfStsGotPrns0lnpvJq9ILtdi-NKqisWBeNiJlUXwmnoGQDh2CMyK9OeTsuEXnJNmFfKjxiHWmcQVYzAhMKo6_QMUtu_Qm6mtpzYYHBrUGboa_Ay0LBuFDtsjtgtvJ-r8G3TsJMowbKF-yo--O_XLhERU_QQVl3hl8zmMRdmLh_P9Cbhuolt | | project_id | 1a74eabbf05c41baadd716179bb9e1da | | user_id | ef679eeddfd14f8b86becfd7e1dc84f2 | +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+" ]
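The replacement workflow in this section can be scripted end to end. The following is a minimal sketch only, using the AppCredsUser and AppCred2 names from this chapter; the rc file name is an assumption, and you must adapt the names and paths to your environment:

# Log in as the user that owns the expiring credential (the rc file name here is an assumption).
source ~/AppCredsUser_rc
# Create the replacement credential and copy the id and secret values from the output.
openstack application credential create --description "App Creds 2 - Member" --role member AppCred2
# Update application_credential_id and application_credential_secret in
# ${HOME}/.config/openstack/clouds.yaml, then verify the new credential from an
# unsourced shell in the same directory as clouds.yaml:
openstack --os-cloud=openstack token issue
# Once the new credential works, delete the old one while sourced as the owning user.
openstack application credential delete AppCredsUser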
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/performing_security_operations/assembly-application-credentials_performing-security-services
18.3. Testing the camel-jbossdatagrid-fuse Quickstart
18.3. Testing the camel-jbossdatagrid-fuse Quickstart To test the local_cache_producer , create a CSV file in the incomingFolderPath specified previously. The following command generates a file with a single entry: Once the file has been removed from the directory, the producer has successfully parsed the file. Proceed to testing the consumer. To test the local_cache_consumer , navigate to http://127.0.0.1:8282/cache/get/1 in a web browser. This queries the cache for the entry with an Id of 1 , which was specified above. The following JSON representation of the created POJO should be returned:
[ "echo \"1,Bill,Gates,59\" > USDincomingFolderPath/sample.csv", "{\"id\":1,\"firstName\":\"Bill\",\"lastName\":\"Gates\",\"age\":59}" ]
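If you prefer to test from the command line rather than a web browser, the same checks can be run with curl. This is a sketch only, assuming the consumer listens on the address shown above and that incomingFolderPath is set in your shell:

# Create a sample entry for the producer to pick up.
echo "1,Bill,Gates,59" > $incomingFolderPath/sample.csv
# After the file disappears from the folder, query the consumer for the cached entry.
curl http://127.0.0.1:8282/cache/get/1
# Expected response: {"id":1,"firstName":"Bill","lastName":"Gates","age":59}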
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/camel-jboss_data_grid_quickstart_testing
Chapter 10. Migrating your applications
Chapter 10. Migrating your applications You can migrate your applications by using the Migration Toolkit for Containers (MTC) web console or from the command line . You can use stage migration and cutover migration to migrate an application between clusters: Stage migration copies data from the source cluster to the target cluster without stopping the application. You can run a stage migration multiple times to reduce the duration of the cutover migration. Cutover migration stops the transactions on the source cluster and moves the resources to the target cluster. You can use state migration to migrate an application's state: State migration copies selected persistent volume claims (PVCs). You can use state migration to migrate a namespace within the same cluster. Most cluster-scoped resources are not yet handled by MTC. If your applications require cluster-scoped resources, you might have to create them manually on the target cluster. During migration, MTC preserves the following namespace annotations: openshift.io/sa.scc.mcs openshift.io/sa.scc.supplemental-groups openshift.io/sa.scc.uid-range These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. 10.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure OpenShift image registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure an Stunnel TCP proxy. Internal images If your application uses internal images from the openshift namespace, you must ensure that the required versions of the images are present on the target cluster. You can manually update an image stream tag in order to use a deprecated OpenShift Container Platform 3 image on an OpenShift Container Platform 4.12 cluster. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 3 cluster: 8443 (API server) 443 (routes) 53 (DNS) You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. The PVs must have the same cloud provider. The PVs must be located in the same geographic region. The PVs must have the same storage class. Additional resources for migration prerequisites Manually exposing a secure registry for OpenShift Container Platform 3 Updating deprecated internal images 10.2. Migrating your applications by using the MTC web console You can configure clusters and a replication repository by using the MTC web console. Then, you can create and run a migration plan. 10.2.1. 
Launching the MTC web console You can launch the Migration Toolkit for Containers (MTC) web console in a browser. Prerequisites The MTC web console must have network access to the OpenShift Container Platform web console. The MTC web console must have network access to the OAuth authorization server. Procedure Log in to the OpenShift Container Platform cluster on which you have installed MTC. Obtain the MTC web console URL by entering the following command: USD oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}' The output resembles the following: https://migration-openshift-migration.apps.cluster.openshift.com . Launch a browser and navigate to the MTC web console. Note If you try to access the MTC web console immediately after installing the Migration Toolkit for Containers Operator, the console might not load because the Operator is still configuring the cluster. Wait a few minutes and retry. If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster API server. The web page guides you through the process of accepting the remaining certificates. Log in with your OpenShift Container Platform username and password . 10.2.2. Adding a cluster to the MTC web console You can add a cluster to the Migration Toolkit for Containers (MTC) web console. Prerequisites Cross-origin resource sharing must be configured on the source cluster. If you are using Azure snapshots to copy data: You must specify the Azure resource group name for the cluster. The clusters must be in the same Azure resource group. The clusters must be in the same geographic location. If you are using direct image migration, you must expose a route to the image registry of the source cluster. Procedure Log in to the cluster. Obtain the migration-controller service account token: USD oc create token migration-controller -n openshift-migration Example output eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ Log in to the MTC web console. In the MTC web console, click Clusters . Click Add cluster . Fill in the following fields: Cluster name : The cluster name can contain lower-case letters ( a-z ) and numbers ( 0-9 ). It must not contain spaces or international characters. URL : Specify the API server URL, for example, https://<www.example.com>:8443 . Service account token : Paste the migration-controller service account token. Exposed route host to image registry : If you are using direct image migration, specify the exposed route to the image registry of the source cluster. 
To create the route, run the following command: For OpenShift Container Platform 3: USD oc create route passthrough --service=docker-registry --port=5000 -n default For OpenShift Container Platform 4: USD oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry Azure cluster : You must select this option if you use Azure snapshots to copy your data. Azure resource group : This field is displayed if Azure cluster is selected. Specify the Azure resource group. When an {OCP} cluster is created on Microsoft Azure, an Azure Resource Group is created to contain all resources associated with the cluster. In the Azure CLI, you can display all resource groups by issuing the following command: USD az group list ResourceGroups associated with OpenShift Container Platform clusters are tagged, where sample-rg-name is the value you would extract and supply to the UI: { "id": "/subscriptions/...//resourceGroups/sample-rg-name", "location": "centralus", "name": "...", "properties": { "provisioningState": "Succeeded" }, "tags": { "kubernetes.io_cluster.sample-ld57c": "owned", "openshift_creationDate": "2019-10-25T23:28:57.988208+00:00" }, "type": "Microsoft.Resources/resourceGroups" }, This information is also available from the Azure Portal in the Resource groups blade. Require SSL verification : Optional: Select this option to verify the Secure Socket Layer (SSL) connection to the cluster. CA bundle file : This field is displayed if Require SSL verification is selected. If you created a custom CA certificate bundle file for self-signed certificates, click Browse , select the CA bundle file, and upload it. Click Add cluster . The cluster appears in the Clusters list. 10.2.3. Adding a replication repository to the MTC web console You can add an object storage as a replication repository to the Migration Toolkit for Containers (MTC) web console. MTC supports the following storage providers: Amazon Web Services (AWS) S3 Multi-Cloud Object Gateway (MCG) Generic S3 object storage, for example, Minio or Ceph S3 Google Cloud Provider (GCP) Microsoft Azure Blob Prerequisites You must configure the object storage as a replication repository. Procedure In the MTC web console, click Replication repositories . Click Add repository . Select a Storage provider type and fill in the following fields: AWS for S3 providers, including AWS and MCG: Replication repository name : Specify the replication repository name in the MTC web console. S3 bucket name : Specify the name of the S3 bucket. S3 bucket region : Specify the S3 bucket region. Required for AWS S3. Optional for some S3 providers. Check the product documentation of your S3 provider for expected values. S3 endpoint : Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com> . Required for a generic S3 provider. You must use the https:// prefix. S3 provider access key : Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider access key for MCG and other S3 providers. S3 provider secret access key : Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider secret access key for MCG and other S3 providers. Require SSL verification : Clear this checkbox if you are using a generic S3 provider. If you created a custom CA certificate bundle for self-signed certificates, click Browse and browse to the Base64-encoded file. GCP : Replication repository name : Specify the replication repository name in the MTC web console. GCP bucket name : Specify the name of the GCP bucket. 
GCP credential JSON blob : Specify the string in the credentials-velero file. Azure : Replication repository name : Specify the replication repository name in the MTC web console. Azure resource group : Specify the resource group of the Azure Blob storage. Azure storage account name : Specify the Azure Blob storage account name. Azure credentials - INI file contents : Specify the string in the credentials-velero file. Click Add repository and wait for connection validation. Click Close . The new repository appears in the Replication repositories list. 10.2.4. Creating a migration plan in the MTC web console You can create a migration plan in the Migration Toolkit for Containers (MTC) web console. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must ensure that the same MTC version is installed on all clusters. You must add the clusters and the replication repository to the MTC web console. If you want to use the move data copy method to migrate a persistent volume (PV), the source and target clusters must have uninterrupted network access to the remote volume. If you want to use direct image migration, you must specify the exposed route to the image registry of the source cluster. This can be done by using the MTC web console or by updating the MigCluster custom resource manifest. Procedure In the MTC web console, click Migration plans . Click Add migration plan . Enter the Plan name . The migration plan name must not exceed 253 lower-case alphanumeric characters ( a-z, 0-9 ) and must not contain spaces or underscores ( _ ). Select a Source cluster , a Target cluster , and a Repository . Click . Select the projects for migration. Optional: Click the edit icon beside a project to change the target namespace. Click . Select a Migration type for each PV: The Copy option copies the data from the PV of a source cluster to the replication repository and then restores the data on a newly created PV, with similar characteristics, in the target cluster. The Move option unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. Click . Select a Copy method for each PV: Snapshot copy backs up and restores data using the cloud provider's snapshot functionality. It is significantly faster than Filesystem copy . Filesystem copy backs up the files on the source cluster and restores them on the target cluster. The file system copy method is required for direct volume migration. You can select Verify copy to verify data migrated with Filesystem copy . Data is verified by generating a checksum for each source file and checking the checksum after restoration. Data verification significantly reduces performance. Select a Target storage class . If you selected Filesystem copy , you can change the target storage class. Click . On the Migration options page, the Direct image migration option is selected if you specified an exposed image registry route for the source cluster. The Direct PV migration option is selected if you are migrating data with Filesystem copy . The direct migration options copy images and files directly from the source cluster to the target cluster. 
This option is much faster than copying images and files from the source cluster to the replication repository and then from the replication repository to the target cluster. Click . Optional: Click Add Hook to add a hook to the migration plan. A hook runs custom code. You can add up to four hooks to a single migration plan. Each hook runs during a different migration step. Enter the name of the hook to display in the web console. If the hook is an Ansible playbook, select Ansible playbook and click Browse to upload the playbook or paste the contents of the playbook in the field. Optional: Specify an Ansible runtime image if you are not using the default hook image. If the hook is not an Ansible playbook, select Custom container image and specify the image name and path. A custom container image can include Ansible playbooks. Select Source cluster or Target cluster . Enter the Service account name and the Service account namespace . Select the migration step for the hook: preBackup : Before the application workload is backed up on the source cluster postBackup : After the application workload is backed up on the source cluster preRestore : Before the application workload is restored on the target cluster postRestore : After the application workload is restored on the target cluster Click Add . Click Finish . The migration plan is displayed in the Migration plans list. Additional resources MTC file system copy method MTC snapshot copy method 10.2.5. Running a migration plan in the MTC web console You can migrate applications and data with the migration plan you created in the Migration Toolkit for Containers (MTC) web console. Note During migration, MTC sets the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster. The Backup custom resource contains a PVOriginalReclaimPolicy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs. Prerequisites The MTC web console must contain the following: Source cluster in a Ready state Target cluster in a Ready state Replication repository Valid migration plan Procedure Log in to the MTC web console and click Migration plans . Click the Options menu to a migration plan and select one of the following options under Migration : Stage copies data from the source cluster to the target cluster without stopping the application. Cutover stops the transactions on the source cluster and moves the resources to the target cluster. Optional: In the Cutover migration dialog, you can clear the Halt transactions on the source cluster during migration checkbox. State copies selected persistent volume claims (PVCs). Important Do not use state migration to migrate a namespace between clusters. Use stage or cutover migration instead. Select one or more PVCs in the State migration dialog and click Migrate . When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform web console: Click Home Projects . Click the migrated project to view its status. In the Routes section, click Location to verify that the application is functioning, if applicable. Click Workloads Pods to verify that the pods are running in the migrated namespace. Click Storage Persistent volumes to verify that the migrated persistent volumes are correctly provisioned.
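The post-migration checks described above can also be run from the command line while logged in to the target cluster. This is a sketch only; my-app is a placeholder for the migrated namespace:

# Verify that the migrated workloads, claims, and routes exist on the target cluster.
oc get pods -n my-app
oc get pvc -n my-app
oc get routes -n my-app
# Confirm that the migrated persistent volumes were set to the Retain reclaim policy.
oc get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy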
[ "oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'", "oc create token migration-controller -n openshift-migration", "eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ", "oc create route passthrough --service=docker-registry --port=5000 -n default", "oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry", "az group list", "{ \"id\": \"/subscriptions/...//resourceGroups/sample-rg-name\", \"location\": \"centralus\", \"name\": \"...\", \"properties\": { \"provisioningState\": \"Succeeded\" }, \"tags\": { \"kubernetes.io_cluster.sample-ld57c\": \"owned\", \"openshift_creationDate\": \"2019-10-25T23:28:57.988208+00:00\" }, \"type\": \"Microsoft.Resources/resourceGroups\" }," ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/migrating_from_version_3_to_4/migrating-applications-3-4
function::tid
function::tid Name function::tid - Returns the thread ID of a target process Synopsis Arguments None Description This function returns the thread ID of the target process.
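The following is a small usage sketch, assuming SystemTap is installed and run with sufficient privileges; the syscall probe point is an arbitrary example chosen for illustration:

# Print the thread ID of every task that enters the open syscall; stop with Ctrl+C.
stap -e 'probe syscall.open { printf("open() called by tid %d\n", tid()) }'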
[ "tid:long()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-tid
Chapter 6. Preparing an Agent-based installed cluster for the multicluster engine for Kubernetes Operator
Chapter 6. Preparing an Agent-based installed cluster for the multicluster engine for Kubernetes Operator You can install the multicluster engine Operator and deploy a hub cluster with the Agent-based OpenShift Container Platform Installer. The following procedure is partially automated and requires manual steps after the initial cluster is deployed. 6.1. Prerequisites You have read the following documentation: Cluster lifecycle with multicluster engine operator overview . Persistent storage using local volumes . Using GitOps ZTP to provision clusters at the network far edge . Preparing to install with the Agent-based Installer . About disconnected installation mirroring . You have access to the internet to obtain the necessary container images. You have installed the OpenShift CLI ( oc ). If you are installing in a disconnected environment, you must have a configured local mirror registry for disconnected installation mirroring. 6.2. Preparing an Agent-based cluster deployment for the multicluster engine for Kubernetes Operator while disconnected You can mirror the required OpenShift Container Platform container images, the multicluster engine Operator, and the Local Storage Operator (LSO) into your local mirror registry in a disconnected environment. Ensure that you note the local DNS hostname and port of your mirror registry. Note To mirror your OpenShift Container Platform image repository to your mirror registry, you can use either the oc adm release image or oc mirror command. In this procedure, the oc mirror command is used as an example. Procedure Create an <assets_directory> folder to contain valid install-config.yaml and agent-config.yaml files. This directory is used to store all the assets. To mirror an OpenShift Container Platform image repository, the multicluster engine, and the LSO, create a ImageSetConfiguration.yaml file with the following settings: Example ImageSetConfiguration.yaml kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 imageURL: <your-local-registry-dns-name>:<your-local-registry-port>/mirror/oc-mirror-metadata 3 skipTLS: true mirror: platform: architectures: - "amd64" channels: - name: stable-4.15 4 type: ocp additionalImages: - name: registry.redhat.io/ubi9/ubi:latest operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 5 packages: 6 - name: multicluster-engine 7 - name: local-storage-operator 8 1 Specify the maximum size, in GiB, of each file within the image set. 2 Set the back-end location to receive the image set metadata. This location can be a registry or local directory. It is required to specify storageConfig values. 3 Set the registry URL for the storage backend. 4 Set the channel that contains the OpenShift Container Platform images for the version you are installing. 5 Set the Operator catalog that contains the OpenShift Container Platform images that you are installing. 6 Specify only certain Operator packages and channels to include in the image set. Remove this field to retrieve all packages in the catalog. 7 The multicluster engine packages and channels. 8 The LSO packages and channels. Note This file is required by the oc mirror command when mirroring content. 
To mirror a specific OpenShift Container Platform image repository, the multicluster engine, and the LSO, run the following command: USD oc mirror --dest-skip-tls --config ocp-mce-imageset.yaml docker://<your-local-registry-dns-name>:<your-local-registry-port> Update the registry and certificate in the install-config.yaml file: Example imageContentSources.yaml imageContentSources: - source: "quay.io/openshift-release-dev/ocp-release" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release-images" - source: "quay.io/openshift-release-dev/ocp-v4.0-art-dev" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release" - source: "registry.redhat.io/ubi9" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/ubi9" - source: "registry.redhat.io/multicluster-engine" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/multicluster-engine" - source: "registry.redhat.io/rhel8" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/rhel8" - source: "registry.redhat.io/redhat" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/redhat" Additionally, ensure your certificate is present in the additionalTrustBundle field of the install-config.yaml . Example install-config.yaml additionalTrustBundle: | -----BEGIN CERTIFICATE----- zzzzzzzzzzz -----END CERTIFICATE------- Important The oc mirror command creates a folder called oc-mirror-workspace with several outputs. This includes the imageContentSourcePolicy.yaml file that identifies all the mirrors you need for OpenShift Container Platform and your selected Operators. Generate the cluster manifests by running the following command: USD openshift-install agent create cluster-manifests This command updates the cluster manifests folder to include a mirror folder that contains your mirror configuration. 6.3. Preparing an Agent-based cluster deployment for the multicluster engine for Kubernetes Operator while connected Create the required manifests for the multicluster engine Operator, the Local Storage Operator (LSO), and to deploy an agent-based OpenShift Container Platform cluster as a hub cluster. Procedure Create a sub-folder named openshift in the <assets_directory> folder. This sub-folder is used to store the extra manifests that will be applied during the installation to further customize the deployed cluster. The <assets_directory> folder contains all the assets including the install-config.yaml and agent-config.yaml files. Note The installer does not validate extra manifests. For the multicluster engine, create the following manifests and save them in the <assets_directory>/openshift folder: Example mce_namespace.yaml apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" name: multicluster-engine Example mce_operatorgroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: multicluster-engine-operatorgroup namespace: multicluster-engine spec: targetNamespaces: - multicluster-engine Example mce_subscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: multicluster-engine namespace: multicluster-engine spec: channel: "stable-2.3" name: multicluster-engine source: redhat-operators sourceNamespace: openshift-marketplace Note You can install a distributed unit (DU) at scale with the Red Hat Advanced Cluster Management (RHACM) using the assisted installer (AI). These distributed units must be enabled in the hub cluster. 
The AI service requires persistent volumes (PVs), which are manually created. For the AI service, create the following manifests and save them in the <assets_directory>/openshift folder: Example lso_namespace.yaml apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/cluster-monitoring: "true" name: openshift-local-storage Example lso_operatorgroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage Example lso_subscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: installPlanApproval: Automatic name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace Note After creating all the manifests, your filesystem must display as follows: Example Filesystem <assets_directory> ├─ install-config.yaml ├─ agent-config.yaml └─ /openshift ├─ mce_namespace.yaml ├─ mce_operatorgroup.yaml ├─ mce_subscription.yaml ├─ lso_namespace.yaml ├─ lso_operatorgroup.yaml └─ lso_subscription.yaml Create the agent ISO image by running the following command: USD openshift-install agent create image --dir <assets_directory> When the image is ready, boot the target machine and wait for the installation to complete. To monitor the installation, run the following command: USD openshift-install agent wait-for install-complete --dir <assets_directory> Note To configure a fully functional hub cluster, you must create the following manifests and manually apply them by running the command USD oc apply -f <manifest-name> . The order of the manifest creation is important and where required, the waiting condition is displayed. For the PVs that are required by the AI service, create the following manifests: apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: assisted-service namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed storageClassDevices: - devicePaths: - /dev/vda - /dev/vdb storageClassName: assisted-service volumeMode: Filesystem Use the following command to wait for the availability of the PVs, before applying the subsequent manifests: USD oc wait localvolume -n openshift-local-storage assisted-service --for condition=Available --timeout 10m Note Create a manifest for a multicluster engine instance. Example MultiClusterEngine.yaml apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: {} Create a manifest to enable the AI service. Example agentserviceconfig.yaml apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: assisted-installer spec: databaseStorage: storageClassName: assisted-service accessModes: - ReadWriteOnce resources: requests: storage: 10Gi filesystemStorage: storageClassName: assisted-service accessModes: - ReadWriteOnce resources: requests: storage: 10Gi Create a manifest to deploy subsequently spoke clusters. Example clusterimageset.yaml apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: "4.15" spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.15.0-x86_64 Create a manifest to import the agent installed cluster (that hosts the multicluster engine and the Assisted Service) as the hub cluster. 
Example autoimport.yaml apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: labels: local-cluster: "true" cloud: auto-detect vendor: auto-detect name: local-cluster spec: hubAcceptsClient: true Wait for the managed cluster to be created. USD oc wait -n multicluster-engine managedclusters local-cluster --for condition=ManagedClusterJoined=True --timeout 10m Verification To confirm that the managed cluster installation is successful, run the following command: USD oc get managedcluster NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://<your cluster url>:6443 True True 77m Additional resources The Local Storage Operator
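As a rough verification sketch before importing spoke clusters, you might also confirm that the Operators and services deployed by the manifests above are healthy. The namespaces and object names follow the example manifests; adjust them if yours differ:

# Confirm that the multicluster engine and Local Storage Operator installed successfully.
oc get csv -n multicluster-engine
oc get csv -n openshift-local-storage
# Confirm that the MultiClusterEngine and AgentServiceConfig instances exist.
oc get multiclusterengine
oc get agentserviceconfig
# Confirm that the persistent volume claims using the assisted-service storage class are bound.
oc get pvc -A | grep assisted-service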
[ "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 imageURL: <your-local-registry-dns-name>:<your-local-registry-port>/mirror/oc-mirror-metadata 3 skipTLS: true mirror: platform: architectures: - \"amd64\" channels: - name: stable-4.15 4 type: ocp additionalImages: - name: registry.redhat.io/ubi9/ubi:latest operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 5 packages: 6 - name: multicluster-engine 7 - name: local-storage-operator 8", "oc mirror --dest-skip-tls --config ocp-mce-imageset.yaml docker://<your-local-registry-dns-name>:<your-local-registry-port>", "imageContentSources: - source: \"quay.io/openshift-release-dev/ocp-release\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release-images\" - source: \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release\" - source: \"registry.redhat.io/ubi9\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/ubi9\" - source: \"registry.redhat.io/multicluster-engine\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/multicluster-engine\" - source: \"registry.redhat.io/rhel8\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/rhel8\" - source: \"registry.redhat.io/redhat\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/redhat\"", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- zzzzzzzzzzz -----END CERTIFICATE-------", "openshift-install agent create cluster-manifests", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" name: multicluster-engine", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: multicluster-engine-operatorgroup namespace: multicluster-engine spec: targetNamespaces: - multicluster-engine", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: multicluster-engine namespace: multicluster-engine spec: channel: \"stable-2.3\" name: multicluster-engine source: redhat-operators sourceNamespace: openshift-marketplace", "apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/cluster-monitoring: \"true\" name: openshift-local-storage", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: installPlanApproval: Automatic name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace", "<assets_directory> ├─ install-config.yaml ├─ agent-config.yaml └─ /openshift ├─ mce_namespace.yaml ├─ mce_operatorgroup.yaml ├─ mce_subscription.yaml ├─ lso_namespace.yaml ├─ lso_operatorgroup.yaml └─ lso_subscription.yaml", "openshift-install agent create image --dir <assets_directory>", "openshift-install agent wait-for install-complete --dir <assets_directory>", "apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: assisted-service namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed storageClassDevices: - devicePaths: - /dev/vda - /dev/vdb storageClassName: assisted-service volumeMode: Filesystem", "oc wait localvolume -n openshift-local-storage assisted-service --for condition=Available --timeout 10m", "The `devicePath` is an 
example and may vary depending on the actual hardware configuration used.", "apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: {}", "apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: assisted-installer spec: databaseStorage: storageClassName: assisted-service accessModes: - ReadWriteOnce resources: requests: storage: 10Gi filesystemStorage: storageClassName: assisted-service accessModes: - ReadWriteOnce resources: requests: storage: 10Gi", "apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: \"4.15\" spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.15.0-x86_64", "apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: labels: local-cluster: \"true\" cloud: auto-detect vendor: auto-detect name: local-cluster spec: hubAcceptsClient: true", "oc wait -n multicluster-engine managedclusters local-cluster --for condition=ManagedClusterJoined=True --timeout 10m", "oc get managedcluster NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://<your cluster url>:6443 True True 77m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_an_on-premise_cluster_with_the_agent-based_installer/preparing-an-agent-based-installed-cluster-for-the-multicluster-engine-for-kubernetes
Chapter 5. NVIDIA GPU architecture overview
Chapter 5. NVIDIA GPU architecture overview NVIDIA supports the use of graphics processing unit (GPU) resources on Red Hat OpenShift Service on AWS. Red Hat OpenShift Service on AWS is a security-focused and hardened Kubernetes platform developed and supported by Red Hat for deploying and managing Kubernetes clusters at scale. Red Hat OpenShift Service on AWS includes enhancements to Kubernetes so that users can easily configure and use NVIDIA GPU resources to accelerate workloads. The NVIDIA GPU Operator leverages the Operator framework within Red Hat OpenShift Service on AWS to manage the full lifecycle of NVIDIA software components required to run GPU-accelerated workloads. These components include the NVIDIA drivers (to enable CUDA), the Kubernetes device plugin for GPUs, the NVIDIA Container Toolkit, automatic node tagging using GPU feature discovery (GFD), DCGM-based monitoring, and others. Note The NVIDIA GPU Operator is only supported by NVIDIA. For more information about obtaining support from NVIDIA, see Obtaining Support from NVIDIA . 5.1. NVIDIA GPU prerequisites A working OpenShift cluster with at least one GPU worker node. Access to the OpenShift cluster as a cluster-admin to perform the required steps. OpenShift CLI ( oc ) is installed. The node feature discovery (NFD) Operator is installed and a nodefeaturediscovery instance is created. 5.2. GPUs and ROSA You can deploy Red Hat OpenShift Service on AWS on NVIDIA GPU instance types. It is important that this compute instance is a GPU-accelerated compute instance and that the GPU type matches the list of supported GPUs from NVIDIA AI Enterprise. For example, T4, V100, and A100 are part of this list. You can choose one of the following methods to access the containerized GPUs: GPU passthrough to access and use GPU hardware within a virtual machine (VM). GPU (vGPU) time slicing when the entire GPU is not required. Additional resources Red Hat Openshift in the Cloud 5.3. GPU sharing methods Red Hat and NVIDIA have developed GPU concurrency and sharing mechanisms to simplify GPU-accelerated computing on an enterprise-level Red Hat OpenShift Service on AWS cluster. Applications typically have different compute requirements that can leave GPUs underutilized. Providing the right amount of compute resources for each workload is critical to reduce deployment cost and maximize GPU utilization. Concurrency mechanisms for improving GPU utilization exist that range from programming model APIs to system software and hardware partitioning, including virtualization. The following list shows the GPU concurrency mechanisms: Compute Unified Device Architecture (CUDA) streams Time-slicing CUDA Multi-Process Service (MPS) Multi-instance GPU (MIG) Virtualization with vGPU Additional resources Improving GPU Utilization 5.3.1. CUDA streams Compute Unified Device Architecture (CUDA) is a parallel computing platform and programming model developed by NVIDIA for general computing on GPUs. A stream is a sequence of operations that executes in issue-order on the GPU. CUDA commands are typically executed sequentially in a default stream and a task does not start until a preceding task has completed. Asynchronous processing of operations across different streams allows for parallel execution of tasks. A task issued in one stream runs before, during, or after another task is issued into another stream. This allows the GPU to run multiple tasks simultaneously in no prescribed order, leading to improved performance. 
Additional resources Asynchronous Concurrent Execution 5.3.2. Time-slicing GPU time-slicing interleaves workloads scheduled on overloaded GPUs when you are running multiple CUDA applications. You can enable time-slicing of GPUs on Kubernetes by defining a set of replicas for a GPU, each of which can be independently distributed to a pod to run workloads on. Unlike multi-instance GPU (MIG), there is no memory or fault isolation between replicas, but for some workloads this is better than not sharing at all. Internally, GPU time-slicing is used to multiplex workloads from replicas of the same underlying GPU. You can apply a cluster-wide default configuration for time-slicing. You can also apply node-specific configurations. For example, you can apply a time-slicing configuration only to nodes with Tesla T4 GPUs and not modify nodes with other GPU models. You can combine these two approaches by applying a cluster-wide default configuration and then labeling nodes to give those nodes a node-specific configuration. 5.3.3. CUDA Multi-Process Service CUDA Multi-Process Service (MPS) allows a single GPU to use multiple CUDA processes. The processes run in parallel on the GPU, eliminating saturation of the GPU compute resources. MPS also enables concurrent execution, or overlapping, of kernel operations and memory copying from different processes to enhance utilization. Additional resources CUDA MPS 5.3.4. Multi-instance GPU Using Multi-instance GPU (MIG), you can split GPU compute units and memory into multiple MIG instances. Each of these instances represents a standalone GPU device from a system perspective and can be connected to any application, container, or virtual machine running on the node. The software that uses the GPU treats each of these MIG instances as an individual GPU. MIG is useful when you have an application that does not require the full power of an entire GPU. The MIG feature of the new NVIDIA Ampere architecture enables you to split your hardware resources into multiple GPU instances, each of which is available to the operating system as an independent CUDA-enabled GPU. NVIDIA GPU Operator version 1.7.0 and higher provides MIG support for the A100 and A30 Ampere cards. These GPU instances are designed to support up to seven multiple independent CUDA applications so that they operate completely isolated with dedicated hardware resources. Additional resources NVIDIA Multi-Instance GPU User Guide 5.3.5. Virtualization with vGPU Virtual machines (VMs) can directly access a single physical GPU using NVIDIA vGPU. You can create virtual GPUs that can be shared by VMs across the enterprise and accessed by other devices. This capability combines the power of GPU performance with the management and security benefits provided by vGPU. Additional benefits provided by vGPU includes proactive management and monitoring for your VM environment, workload balancing for mixed VDI and compute workloads, and resource sharing across multiple VMs. Additional resources Virtual GPUs 5.4. NVIDIA GPU features for Red Hat OpenShift Service on AWS NVIDIA Container Toolkit NVIDIA Container Toolkit enables you to create and run GPU-accelerated containers. The toolkit includes a container runtime library and utilities to automatically configure containers to use NVIDIA GPUs. NVIDIA AI Enterprise NVIDIA AI Enterprise is an end-to-end, cloud-native suite of AI and data analytics software optimized, certified, and supported with NVIDIA-Certified systems. 
NVIDIA AI Enterprise includes support for Red Hat OpenShift Service on AWS. The following installation methods are supported: Red Hat OpenShift Service on AWS on bare metal or VMware vSphere with GPU Passthrough. Red Hat OpenShift Service on AWS on VMware vSphere with NVIDIA vGPU. GPU Feature Discovery NVIDIA GPU Feature Discovery for Kubernetes is a software component that enables you to automatically generate labels for the GPUs available on a node. GPU Feature Discovery uses node feature discovery (NFD) to perform this labeling. The Node Feature Discovery Operator (NFD) manages the discovery of hardware features and configurations in an OpenShift Container Platform cluster by labeling nodes with hardware-specific information. NFD labels the host with node-specific attributes, such as PCI cards, kernel, OS version, and so on. You can find the NFD Operator in the Operator Hub by searching for "Node Feature Discovery". NVIDIA GPU Operator with OpenShift Virtualization Up until this point, the GPU Operator only provisioned worker nodes to run GPU-accelerated containers. Now, the GPU Operator can also be used to provision worker nodes for running GPU-accelerated virtual machines (VMs). You can configure the GPU Operator to deploy different software components to worker nodes depending on which GPU workload is configured to run on those nodes. GPU Monitoring dashboard You can install a monitoring dashboard to display GPU usage information on the cluster Observe page in the Red Hat OpenShift Service on AWS web console. GPU utilization information includes the number of available GPUs, power consumption (in watts), temperature (in degrees Celsius), utilization (in percent), and other metrics for each GPU. Additional resources NVIDIA-Certified Systems NVIDIA AI Enterprise NVIDIA Container Toolkit Enabling the GPU Monitoring Dashboard MIG Support in OpenShift Container Platform Time-slicing NVIDIA GPUs in OpenShift Deploy GPU Operators in a disconnected or airgapped environment
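After the GPU Operator has prepared a GPU worker node, a workload requests GPUs through the nvidia.com/gpu extended resource. The following pod definition is a minimal sketch; the sample image name is an assumption taken from NVIDIA's public samples and may need to be replaced in your environment:

cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vectoradd
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda-vectoradd
    # The image is a placeholder for any CUDA-enabled test workload.
    image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.7.1-ubi8
    resources:
      limits:
        nvidia.com/gpu: 1
EOF
# The pod is scheduled only on a node that exposes the nvidia.com/gpu resource.
oc logs cuda-vectoradd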
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/architecture/nvidia-gpu-architecture-overview
Chapter 4. Using Infoblox as DHCP and DNS Providers
Chapter 4. Using Infoblox as DHCP and DNS Providers You can use Capsule Server to connect to your Infoblox application to create and manage DHCP and DNS records, and to reserve IP addresses. The supported Infoblox version is NIOS 8.0 or higher and Satellite 6.11 or higher. 4.1. Infoblox Limitations All DHCP and DNS records can be managed only in a single Network or DNS view. After you install the Infoblox modules on Capsule and set up the view using the satellite-installer command, you cannot edit the view. Capsule Server communicates with a single Infoblox node using the standard HTTPS web API. If you want to configure clustering and High Availability, make the configurations in Infoblox. Hosting PXE-related files using Infoblox's TFTP functionality is not supported. You must use Capsule as a TFTP server for PXE provisioning. For more information, see Chapter 3, Configuring Networking . Satellite IPAM feature cannot be integrated with Infoblox. 4.2. Infoblox Prerequisites You must have Infoblox account credentials to manage DHCP and DNS entries in Satellite. Ensure that you have Infoblox administration roles with the names: DHCP Admin and DNS Admin . The administration roles must have permissions or belong to an admin group that permits the accounts to perform tasks through the Infoblox API. 4.3. Installing the Infoblox CA Certificate on Capsule Server You must install Infoblox HTTPS CA certificate on the base system for all Capsules that you want to integrate with Infoblox applications. You can download the certificate from the Infoblox web UI, or you can use the following OpenSSL commands to download the certificate: The infoblox.example.com entry must match the host name for the Infoblox application in the X509 certificate. To test the CA certificate, use a CURL query: Example positive response: Use the following Red Hat Knowledgebase article to install the certificate: How to install a CA certificate on Red Hat Enterprise Linux 6 / 7 . 4.4. Installing the DHCP Infoblox module Use this procedure to install the DHCP Infoblox module on Capsule. Note that you cannot manage records in separate views. You can also install DHCP and DNS Infoblox modules simultaneously by combining this procedure and Section 4.5, "Installing the DNS Infoblox Module" . DHCP Infoblox Record Type Considerations Use only the --foreman-proxy-plugin-dhcp-infoblox-record-type fixedaddress option to configure the DHCP and DNS modules. Configuring both DHCP and DNS Infoblox modules with the host record type setting causes DNS conflicts and is not supported. If you install the Infoblox module on Capsule Server with the --foreman-proxy-plugin-dhcp-infoblox-record-type option set to host , you must unset both DNS Capsule and Reverse DNS Capsule options because Infoblox does the DNS management itself. You cannot use the host option without creating conflicts and, for example, being unable to rename hosts in Satellite. Procedure On Capsule, enter the following command: In the Satellite web UI, navigate to Infrastructure > Capsules and select the Capsule with the Infoblox DHCP module and click Refresh . Ensure that the dhcp features are listed. For all domains managed through Infoblox, ensure that the DNS Capsule is set for that domain. To verify, in the Satellite web UI, navigate to Infrastructure > Domains , and inspect the settings of each domain. For all subnets managed through Infoblox, ensure that DHCP Capsule and Reverse DNS Capsule is set. 
To verify, in the Satellite web UI, navigate to Infrastructure > Subnets , and inspect the settings of each subnet. 4.5. Installing the DNS Infoblox Module Use this procedure to install the DNS Infoblox module on Capsule. You can also install DHCP and DNS Infoblox modules simultaneously by combining this procedure and Section 4.4, "Installing the DHCP Infoblox module" . DNS records are managed in a single DNS view only; by default, this is the default DNS view. Procedure On Capsule, enter the following command to configure the Infoblox module: Optionally, you can change the value of the --foreman-proxy-plugin-dns-infoblox-dns-view option to specify a DNS Infoblox view other than the default view. In the Satellite web UI, navigate to Infrastructure > Capsules and select the Capsule with the Infoblox DNS module and click Refresh . Ensure that the dns features are listed.
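As an additional check after installing either module, you can query the Infoblox WAPI directly with the same account that Capsule uses, to confirm that the account can read the views the modules were configured with. This is a sketch only; adjust the hostname, credentials, and WAPI version to your deployment:

# List the available DNS views; the view passed to --foreman-proxy-plugin-dns-infoblox-dns-view must appear here.
curl -u admin:password https://infoblox.example.com/wapi/v2.0/view
# List the available network views for the DHCP module.
curl -u admin:password https://infoblox.example.com/wapi/v2.0/networkview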
[ "update-ca-trust enable openssl s_client -showcerts -connect infoblox.example.com :443 </dev/null | openssl x509 -text >/etc/pki/ca-trust/source/anchors/infoblox.crt update-ca-trust extract", "curl -u admin:password https:// infoblox.example.com /wapi/v2.0/network", "[ { \"_ref\": \"network/ZG5zLm5ldHdvcmskMTkyLjE2OC4yMDIuMC8yNC8w: infoblox.example.com /24/default\", \"network\": \"192.168.202.0/24\", \"network_view\": \"default\" } ]", "satellite-installer --enable-foreman-proxy-plugin-dhcp-infoblox --foreman-proxy-dhcp true --foreman-proxy-dhcp-managed false --foreman-proxy-dhcp-provider infoblox --foreman-proxy-dhcp-server infoblox.example.com --foreman-proxy-plugin-dhcp-infoblox-dns-view default --foreman-proxy-plugin-dhcp-infoblox-network-view default --foreman-proxy-plugin-dhcp-infoblox-password infoblox --foreman-proxy-plugin-dhcp-infoblox-record-type fixedaddress --foreman-proxy-plugin-dhcp-infoblox-username admin", "satellite-installer --enable-foreman-proxy-plugin-dns-infoblox --foreman-proxy-dns true --foreman-proxy-dns-provider infoblox --foreman-proxy-plugin-dns-infoblox-dns-server infoblox.example.com --foreman-proxy-plugin-dns-infoblox-dns-view default --foreman-proxy-plugin-dns-infoblox-password infoblox --foreman-proxy-plugin-dns-infoblox-username admin" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/provisioning_hosts/using_infoblox_as_dhcp_and_dns_providers_provisioning
4.134. libgpg-error
4.134. libgpg-error 4.134.1. RHBA-2011:1717 - libgpg-error enhancement update An updated libgpg-error package is now available for Red Hat Enterprise Linux 6. The libgpg-error library provides a set of common error codes and definitions which are shared by the gnupg, libgcrypt and other packages. Enhancement BZ# 727287 Previously, the libgpg-error package was compiled without the RELRO (read-only relocations) flag. Programs provided by this package were thus vulnerable to various attacks based on overwriting the ELF section of a program. To increase the security of the libgpg-error library, the libgpg-error spec file has been modified to use the "-Wl,-z,relro" flags when compiling the package. As a result, the libgpg-error package is now provided with partial RELRO protection. Users of libgpg-error are advised to upgrade to this updated package, which adds this enhancement.
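To confirm that an installed build carries the partial RELRO protection described here, you can inspect the program headers of the shared library. This is a quick sketch; the library path is an assumption and may differ on your system:

# A GNU_RELRO entry in the program headers indicates that partial RELRO is in effect.
readelf -l /lib64/libgpg-error.so.0 | grep GNU_RELRO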
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/libgpg-error
Chapter 12. Configuring the vSphere connection settings after an installation
Chapter 12. Configuring the vSphere connection settings after an installation After installing an OpenShift Container Platform cluster on vSphere with the platform integration feature enabled, you might need to update the vSphere connection settings manually, depending on the installation method. For installations using the Assisted Installer, you must update the connection settings. This is because the Assisted Installer adds default connection settings to the vSphere connection configuration wizard as placeholders during the installation. For installer-provisioned or user-provisioned infrastructure installations, you should have entered valid connection settings during the installation. You can use the vSphere connection configuration wizard at any time to validate or modify the connection settings, but this is not mandatory for completing the installation. 12.1. Configuring the vSphere connection settings Modify the following vSphere configuration settings as required: vCenter address vCenter cluster vCenter username vCenter password vCenter address vSphere data center vSphere datastore Virtual machine folder Prerequisites The Assisted Installer has finished installing the cluster successfully. The cluster is connected to https://console.redhat.com . Procedure In the Administrator perspective, navigate to Home Overview . Under Status , click vSphere connection to open the vSphere connection configuration wizard. In the vCenter field, enter the network address of the vSphere vCenter server. This can be either a domain name or an IP address. It appears in the vSphere web client URL; for example https://[your_vCenter_address]/ui . In the vCenter cluster field, enter the name of the vSphere vCenter cluster where OpenShift Container Platform is installed. Important This step is mandatory if you installed OpenShift Container Platform 4.13 or later. In the Username field, enter your vSphere vCenter username. In the Password field, enter your vSphere vCenter password. Warning The system stores the username and password in the vsphere-creds secret in the kube-system namespace of the cluster. An incorrect vCenter username or password makes the cluster nodes unschedulable. In the Datacenter field, enter the name of the vSphere data center that contains the virtual machines used to host the cluster; for example, SDDC-Datacenter . In the Default data store field, enter the path and name of the vSphere data store that stores the persistent data volumes; for example, /SDDC-Datacenter/datastore/datastorename . Warning Updating the vSphere data center or default data store after the configuration has been saved detaches any active vSphere PersistentVolumes . In the Virtual Machine Folder field, enter the data center folder that contains the virtual machine of the cluster; for example, /SDDC-Datacenter/vm/ci-ln-hjg4vg2-c61657-t2gzr . For the OpenShift Container Platform installation to succeed, all virtual machines comprising the cluster must be located in a single data center folder. Click Save Configuration . This updates the cloud-provider-config ConfigMap resource in the openshift-config namespace, and starts the configuration process. Reopen the vSphere connection configuration wizard and expand the Monitored operators panel. Check that the status of the operators is either Progressing or Healthy . 12.2. Verifying the configuration The connection configuration process updates operator statuses and control plane nodes. It takes approximately an hour to complete. 
During the configuration process, the nodes will reboot. Previously bound PersistentVolumeClaims objects might become disconnected. Prerequisites You have saved the configuration settings in the vSphere connection configuration wizard. Procedure Check that the configuration process completed successfully: In the OpenShift Container Platform Administrator perspective, navigate to Home Overview . Under Status , click Operators . Wait for all operator statuses to change from Progressing to All succeeded . A Failed status indicates that the configuration failed. Under Status , click Control Plane . Wait for the response rate of all Control Plane components to return to 100%. A Failed control plane component indicates that the configuration failed. A failure indicates that at least one of the connection settings is incorrect. Change the settings in the vSphere connection configuration wizard and save the configuration again. Check that you are able to bind PersistentVolumeClaims objects by performing the following steps: Create a StorageClass object using the following YAML: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: vsphere-sc provisioner: kubernetes.io/vsphere-volume parameters: datastore: YOURVCENTERDATASTORE diskformat: thin reclaimPolicy: Delete volumeBindingMode: Immediate Create a PersistentVolumeClaims object using the following YAML: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-pvc namespace: openshift-config annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: vsphere-sc volumeMode: Filesystem If you are unable to create a PersistentVolumeClaims object, you can troubleshoot by navigating to Storage PersistentVolumeClaims in the Administrator perspective of the OpenShift Container Platform web console. For instructions on creating storage objects, see Dynamic provisioning .
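If you prefer to run the same verification from a terminal instead of the web console, the check can be scripted with standard oc commands. This is a minimal sketch; it assumes the two YAML definitions above have been saved locally as vsphere-sc.yaml and test-pvc.yaml, which are illustrative file names rather than names used elsewhere in this document:

$ oc apply -f vsphere-sc.yaml
$ oc apply -f test-pvc.yaml
# The claim should eventually report a STATUS of Bound
$ oc get pvc test-pvc -n openshift-config
# Inspect the connection settings that the wizard wrote to the cluster
$ oc get configmap cloud-provider-config -n openshift-config -o yaml

If the claim remains in Pending, oc describe pvc test-pvc -n openshift-config shows the provisioning events, which usually point to the incorrect connection setting.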
[ "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: vsphere-sc provisioner: kubernetes.io/vsphere-volume parameters: datastore: YOURVCENTERDATASTORE diskformat: thin reclaimPolicy: Delete volumeBindingMode: Immediate", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-pvc namespace: openshift-config annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: vsphere-sc volumeMode: Filesystem" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_vmware_vsphere/installing-vsphere-post-installation-configuration
Chapter 10. Installing a cluster on AWS into a Secret or Top Secret Region
Chapter 10. Installing a cluster on AWS into a Secret or Top Secret Region In OpenShift Container Platform version 4.14, you can install a cluster on Amazon Web Services (AWS) into the following secret regions: Secret Commercial Cloud Services (SC2S) Commercial Cloud Services (C2S) To configure a cluster in either region, you change parameters in the install-config.yaml file before you install the cluster. 10.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multifactor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 10.2. AWS secret regions The following AWS secret partitions are supported: us-isob-east-1 (SC2S) us-iso-east-1 (C2S) Note The maximum supported MTU in the AWS SC2S and C2S Regions is not the same as in AWS commercial regions. For more information about configuring MTU during installation, see the Cluster Network Operator configuration object section in Installing a cluster on AWS with network customizations . 10.3. Installation requirements Red Hat does not publish a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image for the AWS Secret and Top Secret Regions. Before you can install the cluster, you must: Upload a custom RHCOS AMI. Manually create the installation configuration file ( install-config.yaml ). Specify the AWS region, and the accompanying custom AMI, in the installation configuration file. You cannot use the OpenShift Container Platform installation program to create the installation configuration file. The installer does not list an AWS region without native support for an RHCOS AMI. Important You must also define a custom CA certificate in the additionalTrustBundle field of the install-config.yaml file because the AWS API requires a custom CA trust bundle. To allow the installation program to access the AWS API, the CA certificates must also be defined on the machine that runs the installation program. You must add the CA bundle to the trust store on the machine, use the AWS_CA_BUNDLE environment variable, or define the CA bundle in the ca_bundle field of the AWS config file. 10.4. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. Note Public zones are not supported in Route 53 in an AWS Top Secret Region. Therefore, clusters must be private if they are deployed to an AWS Top Secret Region. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. 
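Returning to the CA trust bundle requirement described in the installation requirements above, a hedged sketch of pointing the AWS tooling on the installation host at the custom CA is shown below; the file path and profile name are placeholders chosen for illustration, not values taken from this document:

$ export AWS_CA_BUNDLE=/path/to/custom-ca-bundle.pem

Alternatively, the bundle can be referenced from the AWS config file, for example in ~/.aws/config:

[profile c2s-install]
region = us-iso-east-1
ca_bundle = /path/to/custom-ca-bundle.pem

Either approach only affects the AWS CLI and SDK calls made from the installation host; the cluster itself still relies on the additionalTrustBundle field in install-config.yaml.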
Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 10.4.1. Private clusters in AWS To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network. The cluster still requires access to the internet to access the AWS APIs. The following items are not required or created when you install a private cluster: Public subnets Public load balancers, which support public ingress A public Route 53 zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. 10.4.1.1. Limitations The ability to add public functionality to a private cluster is limited. You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port). If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers. 10.5. About using a custom VPC In OpenShift Container Platform 4.14, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking yourself for the subnets that you install your cluster to. 10.5.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. 
Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. A cluster in an SC2S or C2S Region is unable to reach the public IP addresses for the EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: SC2S elasticloadbalancing.<aws_region>.sc2s.sgov.gov ec2.<aws_region>.sc2s.sgov.gov s3.<aws_region>.sc2s.sgov.gov C2S elasticloadbalancing.<aws_region>.c2s.ic.gov ec2.<aws_region>.c2s.ic.gov s3.<aws_region>.c2s.ic.gov With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: SC2S elasticloadbalancing.<aws_region>.sc2s.sgov.gov ec2.<aws_region>.sc2s.sgov.gov s3.<aws_region>.sc2s.sgov.gov C2S elasticloadbalancing.<aws_region>.c2s.ic.gov ec2.<aws_region>.c2s.ic.gov s3.<aws_region>.c2s.ic.gov When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. 
With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 10.5.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 10.5.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. 
The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 10.5.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 10.5.5. AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Applying existing AWS security groups to the cluster". 10.6. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 10.7. Uploading a custom RHCOS AMI in AWS If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region. Prerequisites You configured an AWS account. You created an Amazon S3 bucket with the required IAM service role . You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer . 
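As a rough sketch of how the Amazon S3 prerequisites above are commonly satisfied, the bucket can be created and the RHCOS VMDK uploaded with the standard AWS CLI; the bucket name, region, and version below are placeholders, and the IAM service role that VM Import/Export requires (conventionally named vmimport) must already exist as described in the linked AWS documentation:

$ aws s3 mb s3://<vmimport_bucket_name> --region <aws_region>
$ aws s3 cp rhcos-<version>-x86_64-aws.x86_64.vmdk s3://<vmimport_bucket_name>/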
Procedure Export your AWS profile as an environment variable: USD export AWS_PROFILE=<aws_profile> 1 Export the region to associate with your custom AMI as an environment variable: USD export AWS_DEFAULT_REGION=<aws_region> 1 Export the version of RHCOS you uploaded to Amazon S3 as an environment variable: USD export RHCOS_VERSION=<version> 1 1 1 1 The RHCOS VMDK version, like 4.14.0 . Export the Amazon S3 bucket name as an environment variable: USD export VMIMPORT_BUCKET_NAME=<s3_bucket_name> Create the containers.json file and define your RHCOS VMDK file: USD cat <<EOF > containers.json { "Description": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64", "Format": "vmdk", "UserBucket": { "S3Bucket": "USD{VMIMPORT_BUCKET_NAME}", "S3Key": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk" } } EOF Import the RHCOS disk as an Amazon EBS snapshot: USD aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} \ --description "<description>" \ 1 --disk-container "file://<file_path>/containers.json" 2 1 The description of your RHCOS disk being imported, like rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64 . 2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key. Check the status of the image import: USD watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION} Example output { "ImportSnapshotTasks": [ { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "ImportTaskId": "import-snap-fh6i8uil", "SnapshotTaskDetail": { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "DiskImageSize": 819056640.0, "Format": "VMDK", "SnapshotId": "snap-06331325870076318", "Status": "completed", "UserBucket": { "S3Bucket": "external-images", "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk" } } } ] } Copy the SnapshotId to register the image. Create a custom RHCOS AMI from the RHCOS snapshot: USD aws ec2 register-image \ --region USD{AWS_DEFAULT_REGION} \ --architecture x86_64 \ 1 --description "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 2 --ena-support \ --name "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 3 --virtualization-type hvm \ --root-device-name '/dev/xvda' \ --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4 1 The RHCOS VMDK architecture type, like x86_64 , aarch64 , s390x , or ppc64le . 2 The Description from the imported snapshot. 3 The name of the RHCOS AMI. 4 The SnapshotID from the imported snapshot. To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs . 10.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. 
Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 10.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. 
To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 10.10. Manually creating the installation configuration file Installing the cluster requires that you manually generate the installation configuration file. Prerequisites You have uploaded a custom RHCOS AMI. You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for AWS 10.10.1. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 10.1. Machine types based on 64-bit x86 architecture for secret regions c4.* c5.* i3.* m4.* m5.* r4.* r5.* t3.* 10.10.2. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. 
apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-iso-east-1a - us-iso-east-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-iso-east-1a - us-iso-east-1b replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-iso-east-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23 pullSecret: '{"auths": ...}' 24 additionalTrustBundle: | 25 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 12 14 17 24 Required. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 
16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 18 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 19 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 20 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 21 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 22 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 23 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 25 The custom CA certificate. This is required when deploying to the SC2S or C2S Regions because the AWS API requires a custom CA trust bundle. 10.10.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 10.10.4. Applying existing AWS security groups to the cluster Applying existing AWS security groups to your control plane and compute machines can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. Prerequisites You have created the security groups in AWS. For more information, see the AWS documentation about working with security groups . The security groups must be associated with the existing VPC that you are deploying the cluster to. The security groups cannot be associated with another VPC. You have an existing install-config.yaml file. 
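Before editing the configuration, it can help to confirm that the security groups are attached to the VPC that you are deploying into. The following is a minimal check with the AWS CLI, using a placeholder VPC ID:

$ aws ec2 describe-security-groups \
    --filters Name=vpc-id,Values=<vpc_id> \
    --query 'SecurityGroups[].{ID:GroupId,Name:GroupName}' \
    --output table

The sg-prefixed identifiers returned by this query are the values that you add to the additionalSecurityGroupIDs lists in the following procedure.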
Procedure In the install-config.yaml file, edit the compute.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your compute machines. Edit the controlPlane.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your control plane machines. Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies custom security groups # ... compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3 1 Specify the name of the security group as it appears in the Amazon EC2 console, including the sg prefix. 2 Specify subnets for each availability zone that your cluster uses. 10.11. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 10.12. 
Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 10.12.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... 
Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 10.12.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 10.12.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 10.2. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 10.3. Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. 
Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 10.12.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 10.12.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. 
Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 10.12.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. 
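As a hedged illustration of the review-before-apply workflow noted above, and assuming the subcommand you run supports the --dry-run flag, ccoctl can write the intended AWS calls to local JSON files that you then inspect and submit yourself; the generated file name below is a placeholder for whatever ccoctl writes to your output directory:

$ ccoctl aws create-iam-roles \
    --name=<name> \
    --region=<aws_region> \
    --credentials-requests-dir=<path_to_credentials_requests_directory> \
    --identity-provider-arn=<oidc_provider_arn> \
    --dry-run
# Review the generated JSON, then apply it manually, for example:
$ aws iam create-role --cli-input-json file://<generated_role_file>.json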
Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. 
Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 10.12.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 10.13. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. 
Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 10.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 
Verify you can run oc commands successfully using the exported configuration: $ oc whoami Example output system:admin 10.15. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: $ cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: $ oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources Accessing the web console 10.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 10.17. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials .
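For convenience, the login and verification steps in Sections 10.14 and 10.15 can be collected into one short session. This is only a sketch; <installation_directory> is a placeholder for the directory that you passed to the installation program:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig
$ oc whoami                                                       # expected output: system:admin
$ cat <installation_directory>/auth/kubeadmin-password            # password for the web console login
$ oc get routes -n openshift-console | grep 'console-openshift'   # URL of the web console route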
[ "export AWS_PROFILE=<aws_profile> 1", "export AWS_DEFAULT_REGION=<aws_region> 1", "export RHCOS_VERSION=<version> 1", "export VMIMPORT_BUCKET_NAME=<s3_bucket_name>", "cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF", "aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2", "watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}", "{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }", "aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-iso-east-1a - us-iso-east-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-iso-east-1a - us-iso-east-1b replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-iso-east-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 
22 publish: Internal 23 pullSecret: '{\"auths\": ...}' 24 additionalTrustBundle: | 25 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests 
--included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_aws/installing-aws-secret-region
13.2. Using SR-IOV
13.2. Using SR-IOV This section covers the use of PCI passthrough to assign a Virtual Function of an SR-IOV capable multiport network card to a virtual machine as a network device. SR-IOV Virtual Functions (VFs) can be assigned to virtual machines by adding a device entry in <hostdev> with the virsh edit or virsh attach-device command. However, this can be problematic because unlike a regular network device, an SR-IOV VF network device does not have a permanent unique MAC address, and is assigned a new MAC address each time the host is rebooted. Because of this, even if the guest is assigned the same VF after a reboot, when the host is rebooted the guest determines its network adapter to have a new MAC address. As a result, the guest believes there is new hardware connected each time, and will usually require re-configuration of the guest's network settings. libvirt-0.9.10 and later contains the <interface type='hostdev'> interface device. Using this interface device, libvirt will first perform any network-specific hardware/switch initialization indicated (such as setting the MAC address, VLAN tag, or 802.1Qbh virtualport parameters), then perform the PCI device assignment to the guest. Using the <interface type='hostdev'> interface device requires: an SR-IOV-capable network card, host hardware that supports either the Intel VT-d or the AMD IOMMU extensions, and the PCI address of the VF to be assigned. For a list of network interface cards (NICs) with SR-IOV support, see https://access.redhat.com/articles/1390483 . Important Assignment of an SR-IOV device to a virtual machine requires that the host hardware supports the Intel VT-d or the AMD IOMMU specification. To attach an SR-IOV network device on an Intel or an AMD system, follow this procedure: Procedure 13.1. Attach an SR-IOV network device on an Intel or AMD system Enable Intel VT-d or the AMD IOMMU specifications in the BIOS and kernel On an Intel system, enable Intel VT-d in the BIOS if it is not enabled already. Refer to Procedure 12.1, "Preparing an Intel system for PCI device assignment" for procedural help on enabling Intel VT-d in the BIOS and kernel. Skip this step if Intel VT-d is already enabled and working. On an AMD system, enable the AMD IOMMU specifications in the BIOS if they are not enabled already. Refer to Procedure 12.2, "Preparing an AMD system for PCI device assignment" for procedural help on enabling IOMMU in the BIOS. Verify support Verify if the PCI device with SR-IOV capabilities is detected. This example lists an Intel 82576 network interface card which supports SR-IOV. Use the lspci command to verify whether the device was detected. Note that the output has been modified to remove all other devices. Start the SR-IOV kernel modules If the device is supported the driver kernel module should be loaded automatically by the kernel. Optional parameters can be passed to the module using the modprobe command. The Intel 82576 network interface card uses the igb driver kernel module. Activate Virtual Functions The max_vfs parameter of the igb module allocates the maximum number of Virtual Functions. The max_vfs parameter causes the driver to spawn, up to the value of the parameter in, Virtual Functions. For this particular card the valid range is 0 to 7 . Remove the module to change the variable. Restart the module with the max_vfs set to 7 or any number of Virtual Functions up to the maximum supported by your device. 
Make the Virtual Functions persistent Add the line options igb max_vfs=7 to any file in /etc/modprobe.d to make the Virtual Functions persistent. For example: Inspect the new Virtual Functions Using the lspci command, list the newly added Virtual Functions attached to the Intel 82576 network device. (Alternatively, use grep to search for Virtual Function , to search for devices that support Virtual Functions.) The identifier for the PCI device is found with the -n parameter of the lspci command. The Physical Functions correspond to 0b:00.0 and 0b:00.1 . All Virtual Functions have Virtual Function in the description. Verify devices exist with virsh The libvirt service must recognize the device before adding a device to a virtual machine. libvirt uses a similar notation to the lspci output. All punctuation characters, : and . , in lspci output are changed to underscores ( _ ). Use the virsh nodedev-list command and the grep command to filter the Intel 82576 network device from the list of available host devices. 0b is the filter for the Intel 82576 network devices in this example. This may vary for your system and may result in additional devices. The serial numbers for the Virtual Functions and Physical Functions should be in the list. Get device details with virsh The pci_0000_0b_00_0 is one of the Physical Functions and pci_0000_0b_10_0 is the first corresponding Virtual Function for that Physical Function. Use the virsh nodedev-dumpxml command to get advanced output for both devices. This example adds the Virtual Function pci_0000_0b_10_0 to the virtual machine in Step 9 . Note the bus , slot and function parameters of the Virtual Function: these are required for adding the device. Copy these parameters into a temporary XML file, such as /tmp/new-interface.xml for example. <interface type='hostdev' managed='yes'> <source> <address type='pci' domain='0' bus='11' slot='16' function='0'/> </source> </interface> Note If you do not specify a MAC address, one will be automatically generated. The <virtualport> element is only used when connecting to an 802.1Qbh hardware switch. The <vlan> element is new for Red Hat Enterprise Linux 6.4 and this will transparently put the guest's device on the VLAN tagged 42 . When the virtual machine starts, it should see a network device of the type provided by the physical adapter, with the configured MAC address. This MAC address will remain unchanged across host and guest reboots. The following <interface> example shows the syntax for the optional <mac address> , <virtualport> , and <vlan> elements. In practice, use either the <vlan> or <virtualport> element, not both simultaneously as shown in the example: ... <devices> ... <interface type='hostdev' managed='yes'> <source> <address type='pci' domain='0' bus='11' slot='16' function='0'/> </source> <mac address='52:54:00:6d:90:02'> <vlan> <tag id='42'/> </vlan> <virtualport type='802.1Qbh'> <parameters profileid='finance'/> </virtualport> </interface> ... </devices> Add the Virtual Function to the virtual machine Add the Virtual Function to the virtual machine using the following command with the temporary file created in the previous step. This attaches the new device immediately and saves it for subsequent guest restarts. Using the --config option ensures the new device is available after future guest restarts. The virtual machine detects a new network interface card. This new card is the Virtual Function of the SR-IOV device.
[ "lspci 03:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01) 03:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)", "modprobe igb [<option>=<VAL1>,<VAL2>,] lsmod |grep igb igb 87592 0 dca 6708 1 igb", "modprobe -r igb", "modprobe igb max_vfs=7", "echo \"options igb max_vfs=7\" >>/etc/modprobe.d/igb.conf", "lspci | grep 82576 0b:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01) 0b:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection(rev 01) 0b:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:10.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:10.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:10.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:10.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:10.6 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:10.7 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:11.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:11.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:11.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:11.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:11.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:11.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)", "virsh nodedev-list | grep 0b pci_0000_0b_00_0 pci_0000_0b_00_1 pci_0000_0b_10_0 pci_0000_0b_10_1 pci_0000_0b_10_2 pci_0000_0b_10_3 pci_0000_0b_10_4 pci_0000_0b_10_5 pci_0000_0b_10_6 pci_0000_0b_11_7 pci_0000_0b_11_1 pci_0000_0b_11_2 pci_0000_0b_11_3 pci_0000_0b_11_4 pci_0000_0b_11_5", "virsh nodedev-dumpxml pci_0000_0b_00_0 <device> <name>pci_0000_0b_00_0</name> <parent>pci_0000_00_01_0</parent> <driver> <name>igb</name> </driver> <capability type='pci'> <domain>0</domain> <bus>11</bus> <slot>0</slot> <function>0</function> <product id='0x10c9'>82576 Gigabit Network Connection</product> <vendor id='0x8086'>Intel Corporation</vendor> </capability> </device>", "virsh nodedev-dumpxml pci_0000_0b_10_0 <device> <name>pci_0000_0b_10_0</name> <parent>pci_0000_00_01_0</parent> <driver> <name>igbvf</name> </driver> <capability type='pci'> <domain>0</domain> <bus>11</bus> <slot>16</slot> <function>0</function> <product id='0x10ca'>82576 Virtual Function</product> <vendor id='0x8086'>Intel Corporation</vendor> </capability> </device>", "<interface type='hostdev' managed='yes'> <source> <address type='pci' domain='0' bus='11' slot='16' function='0'/> </source> </interface>", "<devices> <interface type='hostdev' managed='yes'> <source> <address type='pci' domain='0' bus='11' slot='16' function='0'/> </source> <mac address='52:54:00:6d:90:02'> <vlan> <tag id='42'/> </vlan> <virtualport type='802.1Qbh'> <parameters profileid='finance'/> </virtualport> </interface> </devices>", "virsh attach-device MyGuest /tmp/new-interface.xml --config" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/sect-virtualization_host_configuration_and_guest_installation_guide-sr_iov-how_sr_iov_libvirt_works
Chapter 18. Authenticating Business Central through RH-SSO
Chapter 18. Authenticating Business Central through RH-SSO This chapter describes how to authenticate Business Central through RH-SSO. It includes the following sections: Section 18.1, "Creating the Business Central client for RH-SSO" Section 18.2, "Installing the RH-SSO client adapter for Business Central" Section 18.3, "Enabling access to external file systems and Git repository services for Business Central using RH-SSO" Prerequisites Business Central is installed in a Red Hat JBoss EAP 7.4 server, as described in Installing and configuring Red Hat Decision Manager on Red Hat JBoss EAP 7.4 . RH-SSO is installed as described in Chapter 16, Installing and configuring RH-SSO . You added Business Central users to RH-SSO as described in Section 17.1, "Adding Red Hat Decision Manager users" . Optional: To manage RH-SSO users from Business Central, you added all realm-management client roles in RH-SSO to the Business Central administrator user. Note Except for Section 18.1, "Creating the Business Central client for RH-SSO" , this section is intended for standalone installations. If you are integrating RH-SSO and Red Hat Decision Manager on Red Hat OpenShift Container Platform, complete only the steps in Section 18.1, "Creating the Business Central client for RH-SSO" and then deploy the Red Hat Decision Manager environment on Red Hat OpenShift Container Platform. For information about deploying Red Hat Decision Manager on Red Hat OpenShift Container Platform, see Deploying Red Hat Decision Manager on Red Hat OpenShift Container Platform . 18.1. Creating the Business Central client for RH-SSO After the RH-SSO server starts, use the RH-SSO Admin Console to create the Business Central client for RH-SSO. Procedure Enter http://localhost:8180/auth/admin in a web browser to open the RH-SSO Admin Console and log in using the admin credentials that you created while installing RH-SSO. Note If you are configuring RH-SSO with Red Hat OpenShift Container Platform, enter the URL that is exposed by the RH-SSO routes. Your OpenShift administrator can provide this URL if necessary. When you login for the first time, you can set up the initial user on the new user registration form. In the RH-SSO Admin Console, click the Realm Settings menu item. On the Realm Settings page, click Add Realm . The Add realm page opens. On the Add realm page, provide a name for the realm and click Create . Click the Clients menu item and click Create . The Add Client page opens. On the Add Client page, provide the required information to create a new client for your realm. For example: Client ID : kie Client protocol : openid-connect Root URL : http:// localhost :8080/business-central Note If you are configuring RH-SSO with Red Hat OpenShift Container Platform, enter the URL that is exposed by the KIE Server routes. Your OpenShift administrator can provide this URL if necessary. Click Save to save your changes. After you create a new client, its Access Type is set to public by default. Change it to confidential . The RH-SSO server is now configured with a realm with a client for Business Central applications and running and listening for HTTP connections at localhost:8180 . This realm provides different users, roles, and sessions for Business Central applications. Note The RH-SSO server client uses one URL to a single business-central deployment. The following error message might be displayed if there are two or more deployment configurations: We are sorry... 
Invalid parameter: redirect_uri To resolve this error, append /* to the Valid Redirect URIs field in the client configuration. On the Configure page, go to Clients > kie > Settings , and append the Valid Redirect URIs field with /* , for example: 18.2. Installing the RH-SSO client adapter for Business Central After you install RH-SSO, you must install the RH-SSO client adapter for Red Hat JBoss EAP and configure it for Business Central. Prerequisites Business Central is installed in a Red Hat JBoss EAP 7.4 instance, as described in Installing and configuring Red Hat Decision Manager on Red Hat JBoss EAP 7.4 . RH-SSO is installed as described in Chapter 16, Installing and configuring RH-SSO . A user with the admin role has been added to RH-SSO as described in Section 17.1, "Adding Red Hat Decision Manager users" . Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required) and then select the product and version from the drop-down options: Product: Red Hat Single Sign-On Version: 7.5 Select the Patches tab. Download Red Hat Single Sign-On 7.5 Client Adapter for EAP 7 ( rh-sso-7.5.0-eap7-adapter.zip or the latest version). Extract and install the adapter zip file. For installation instructions, see the "JBoss EAP Adapter" section of the Red Hat Single Sign On Securing Applications and Services Guide . Note Install the adapter with the -Dserver.config=standalone-full.xml property. Navigate to the EAP_HOME /standalone/configuration directory in your Red Hat JBoss EAP installation and open the standalone-full.xml file in a text editor. Add the system properties listed in the following example to <system-properties> : <system-properties> <property name="org.jbpm.workbench.kie_server.keycloak" value="true"/> <property name="org.uberfire.ext.security.management.api.userManagementServices" value="KCAdapterUserManagementService"/> <property name="org.uberfire.ext.security.management.keycloak.authServer" value="http://localhost:8180/auth"/> </system-properties> Optional: If you want to use client roles, add the following system property: <property name="org.uberfire.ext.security.management.keycloak.use-resource-role-mappings" value="true"/> By default, the client resource name is kie . The client resource name must be the same as the client name that you used to configure the client in RH-SSO. If you want to use a custom client resource name, add the following system property: <property name="org.uberfire.ext.security.management.keycloak.resource" value="customClient"/> Replace customClient with the client resource name. Add the RH-SSO subsystem configuration. For example: <subsystem xmlns="urn:jboss:domain:keycloak:1.1"> <secure-deployment name="business-central.war"> <realm>demo</realm> <realm-public-key>MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCrVrCuTtArbgaZzL1hvh0xtL5mc7o0NqPVnYXkLvgcwiC3BjLGw1tGEGoJaXDuSaRllobm53JBhjx33UNv+5z/UMG4kytBWxheNVKnL6GgqlNabMaFfPLPCF8kAgKnsi79NMo+n6KnSY8YeUmec/p2vjO2NjsSAVcWEQMVhJ31LwIDAQAB</realm-public-key> <auth-server-url>http://localhost:8180/auth</auth-server-url> <ssl-required>external</ssl-required> <enable-basic-auth>true</enable-basic-auth> <resource>kie</resource> <credential name="secret">759514d0-dbb1-46ba-b7e7-ff76e63c6891</credential> <principal-attribute>preferred_username</principal-attribute> </secure-deployment> </subsystem> In this example: secure-deployment name is the name of your application's WAR file. realm is the name of the realm that you created for the applications to use. 
realm-public-key is the public key of the realm you created. You can find the key in the Keys tab in the Realm settings page of the realm you created in the RH-SSO Admin Console. If you do not provide a value for realm-public-key , the server retrieves it automatically. auth-server-url is the URL for the RH-SSO authentication server. enable-basic-auth is the setting to enable basic authentication mechanism, so that the clients can use both token-based and basic authentication approaches to perform the requests. resource is the name for the client that you created. To use client roles, set the client resource name that you used when configuring the client in RH-SSO. credential name is the secret key for the client you created. You can find the key in the Credentials tab on the Clients page of the RH-SSO Admin Console. principal-attribute is the attribute for displaying the user name in the application. If you do not provide this value, your User Id is displayed in the application instead of your user name. Note The RH-SSO server converts the user names to lower case. Therefore, after integration with RH-SSO, your user name will appear in lower case in Red Hat Decision Manager. If you have user names in upper case hard coded in business processes, the application might not be able to identify the upper case user. If you want to use client roles, also add the following setting under <secure-deployment> : <use-resource-role-mappings>true</use-resource-role-mappings> The Elytron subsystem provides a built-in policy provider based on JACC specification. To enable the JACC manually in the standalone.xml or in the file where Elytron is installed, do any of the following tasks: To create the policy provider, enter the following commands in the management command-line interface (CLI) of Red Hat JBoss EAP: For more information about the Red Hat JBoss EAP management CLI, see the Management CLI Guide for Red Hat JBoss EAP. Navigate to the EAP_HOME /standalone/configuration directory in your Red Hat JBoss EAP installation. Locate the Elytron and undertow subsystem configurations in the standalone.xml and standalone-full.xml files and enable JACC. For example: <subsystem xmlns="urn:jboss:domain:undertow:12.0" ... > ... <application-security-domains> <application-security-domain name="other" http-authentication-factory="keycloak-http-authentication"/> </application-security-domains> <subsystem xmlns="urn:jboss:domain:ejb3:9.0"> ... <application-security-domains> <application-security-domain name="other" security-domain="KeycloakDomain"/> </application-security-domains> Navigate to EAP_HOME /bin/ and enter the following command to start the Red Hat JBoss EAP server: Note You can also configure the RH-SSO adapter for Business Central by updating your application's WAR file to use the RH-SSO security subsystem. However, Red Hat recommends that you configure the adapter through the RH-SSO subsystem. Doing this updates the Red Hat JBoss EAP configuration instead of applying the configuration on each WAR file. 18.3. Enabling access to external file systems and Git repository services for Business Central using RH-SSO To enable Business Central to consume other remote services, such as file systems and Git repositories, using RH-SSO authentication, you must create a configuration file. Procedure Generate a JSON configuration file: Navigate to the RH-SSO Admin Console located at http://localhost:8180/auth/admin. Click Clients . Create a new client with the following settings: Set Client ID as kie-git . 
Set Access Type as confidential . Disable the Standard Flow Enabled option. Enable the Direct Access Grants Enabled option. Click Save . Click the Installation tab at the top of the client configuration screen and choose Keycloak OIDC JSON as a Format Option . Click Download . Move the downloaded JSON file to an accessible directory in the server's file system or add it to the application class path. The default name and location for this file is USDEAP_HOME/kie-git.json . Optional: In the EAP_HOME /standalone/configuration/standalone-full.xml file, under the <system-properties> tag, add the following system property: <property name="org.uberfire.ext.security.keycloak.keycloak-config-file" value="USDEAP_HOME/kie-git.json"/> Replace the USD EAP_HOME /kie-git.json value of the property with the absolute path or the class path ( classpath:/ EXAMPLE_PATH /kie-git.json ) to the new JSON configuration file. Note If you do not set the org.uberfire.ext.security.keycloak.keycloak-config-file property, Red Hat Decision Manager reads the USDEAP_HOME/kie-git.json file. Result All users authenticated through the RH-SSO server can clone internal GIT repositories. In the following command, replace USER_NAME with a RH-SSO user, for example admin : + Note The RH-SSO server client uses one URL to a single remote service deployment. The following error message might be displayed if there are two or more deployment configurations: We are sorry... Invalid parameter: redirect_uri To resolve this error, append /* to the Valid Redirect URIs field in the client configuration. On the Configure page, go to Clients > kie-git > Settings , and append the Valid Redirect URIs field with /* , for example:
[ "http://localhost:8080/business-central/*", "<system-properties> <property name=\"org.jbpm.workbench.kie_server.keycloak\" value=\"true\"/> <property name=\"org.uberfire.ext.security.management.api.userManagementServices\" value=\"KCAdapterUserManagementService\"/> <property name=\"org.uberfire.ext.security.management.keycloak.authServer\" value=\"http://localhost:8180/auth\"/> </system-properties>", "<property name=\"org.uberfire.ext.security.management.keycloak.use-resource-role-mappings\" value=\"true\"/>", "<property name=\"org.uberfire.ext.security.management.keycloak.resource\" value=\"customClient\"/>", "<subsystem xmlns=\"urn:jboss:domain:keycloak:1.1\"> <secure-deployment name=\"business-central.war\"> <realm>demo</realm> <realm-public-key>MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCrVrCuTtArbgaZzL1hvh0xtL5mc7o0NqPVnYXkLvgcwiC3BjLGw1tGEGoJaXDuSaRllobm53JBhjx33UNv+5z/UMG4kytBWxheNVKnL6GgqlNabMaFfPLPCF8kAgKnsi79NMo+n6KnSY8YeUmec/p2vjO2NjsSAVcWEQMVhJ31LwIDAQAB</realm-public-key> <auth-server-url>http://localhost:8180/auth</auth-server-url> <ssl-required>external</ssl-required> <enable-basic-auth>true</enable-basic-auth> <resource>kie</resource> <credential name=\"secret\">759514d0-dbb1-46ba-b7e7-ff76e63c6891</credential> <principal-attribute>preferred_username</principal-attribute> </secure-deployment> </subsystem>", "<use-resource-role-mappings>true</use-resource-role-mappings>", "/subsystem=undertow/application-security-domain=other:remove() /subsystem=undertow/application-security-domain=other:add(http-authentication-factory=\"keycloak-http-authentication\") /subsystem=ejb3/application-security-domain=other:write-attribute(name=security-domain, value=KeycloakDomain)", "<subsystem xmlns=\"urn:jboss:domain:undertow:12.0\" ... > <application-security-domains> <application-security-domain name=\"other\" http-authentication-factory=\"keycloak-http-authentication\"/> </application-security-domains>", "<subsystem xmlns=\"urn:jboss:domain:ejb3:9.0\"> <application-security-domains> <application-security-domain name=\"other\" security-domain=\"KeycloakDomain\"/> </application-security-domains>", "./standalone.sh -c standalone-full.xml", "<property name=\"org.uberfire.ext.security.keycloak.keycloak-config-file\" value=\"USDEAP_HOME/kie-git.json\"/>", "git clone ssh://USER_NAME@localhost:8001/system", "http://localhost:8080/remote-system/*" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/integrating_red_hat_decision_manager_with_other_products_and_components/sso-central-proc_integrate-sso
4.295. shadow-utils
4.295. shadow-utils 4.295.1. RHBA-2011:1650 - shadow-utils bug fix and enhancement update An updated shadow-utils package that fixes multiple bugs and adds three enhancements is now available for Red Hat Enterprise Linux 6. The shadow-utils package includes programs for converting UNIX password files to the shadow password format, as well as tools for managing user and group accounts. Bug Fixes BZ# 586796 Previously, the extended access control lists (ACL) on a file or directory below the /etc/skel directory were not preserved when a new user was created. As a result, the file or directory was copied but the extended ACLs that were associated with the file or directory were lost. This update preserves these extended ACLs. BZ# 667593 Previously, the switch-group (sg) command failed with a segmentation fault when using password protected groups. This update modifies the gshadow functions in shadow-utils and also uses the gshadow functions from glibc so that the sg command now handles password protected groups as expected. BZ# 672510 Previously, the new group (newgrp) command failed with a segmentation fault when using password protected groups. This update modifies the newgrp command so that the newgrp command now handles password protected groups as expected. BZ# 674878 , BZ# 696213 Previously, the man page for the useradd command contained misleading information about the -m option. The -m option is now described correctly. BZ# 693377 Previously, the useradd command failed with a segmentation fault when the user ID (UID) range exceeded the maximum of 2147483647 (UID_MAX) accounts on a 64-bit system. This update replaces the alloca() function with the malloc() function and checks the return value. Now, the useradd command operates in this range as expected. BZ# 706321 Previously, the lastlog command did not work correctly with large UIDs on 32-bit systems due to integer overflow. As a result, lastlog showed only users that were logged in. This update modifies the code so that lastlog now also shows users that were never logged in. Enhancements BZ# 723921 This update is compiled with the position-independent executable (PIE) and relocation read-only (RELRO) flags, which enhance the security of the system. BZ# 639900 With this update, the userdel command offers the option to delete the user both from the system and from the SELinux login mapping. BZ# 629277 , BZ# 696213 This update adds additional comments in "/etc/login.defs". These comments inform the administrator that certain configuration options are ignored in favor of the pam-cracklib module. All users of shadow-utils are advised to upgrade to this updated package, which fixes these bugs and adds these enhancements.
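A quick way to confirm the BZ#586796 fix, namely that extended ACLs below /etc/skel are preserved for newly created users, is sketched below; run it as root. The group webadmins, the user acltest, and the choice of .bashrc are example values only:
groupadd webadmins                             # example group used only for the ACL entry
setfacl -m g:webadmins:r-- /etc/skel/.bashrc   # add an extended ACL below /etc/skel
getfacl /etc/skel/.bashrc                      # shows the group:webadmins:r-- entry
useradd -m acltest                             # create a user with a home directory copied from /etc/skel
getfacl /home/acltest/.bashrc                  # with the fixed package, the extended ACL entry is preserved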
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/shadow-utils
10.7. Adding Modules
10.7. Adding Modules The Apache HTTP Server supports Dynamically Shared Objects ( DSO s), or modules, which can easily be loaded at runtime as necessary. The Apache Project provides complete DSO documentation online at http://httpd.apache.org/docs-2.0/dso.html . Or, if the httpd-manual package is installed, documentation about DSOs can be found online at http://localhost/manual/mod/ . For the Apache HTTP Server to use a DSO, it must be specified in a LoadModule directive within /etc/httpd/conf/httpd.conf . If the module is provided by a separate package, the line must appear within the modules configuration file in the /etc/httpd/conf.d/ directory. Refer to Section 10.5.12, " LoadModule " for more information. If adding or deleting modules from httpd.conf , the Apache HTTP Server must be reloaded or restarted, as described in Section 10.4, "Starting and Stopping httpd " . If creating a new module, first install the httpd-devel package, which contains the include files, the header files, and the APache eXtenSion ( /usr/sbin/apxs ) application, which uses the include files and the header files to compile DSOs. After writing a module, use /usr/sbin/apxs to compile the module sources outside the Apache source tree. For more information about using the /usr/sbin/apxs command, refer to the Apache documentation online at http://httpd.apache.org/docs-2.0/dso.html as well as the apxs man page. Once compiled, put the module in the /usr/lib/httpd/modules/ directory. Then add a LoadModule line to httpd.conf , using the following structure: Where <module-name> is the name of the module and <path/to/module.so> is the path to the DSO.
[ "LoadModule <module-name> <path/to/module.so>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-apache-addmods
Chapter 6. Managing permissions
Chapter 6. Managing permissions A permission associates the object being protected and the policies that must be evaluated to decide whether access should be granted. After creating the resources you want to protect and the policies you want to use to protect these resources, you can start managing permissions. To manage permissions, click the Permissions tab when editing a resource server. Permissions Permissions can be created to protect two main types of objects: Resources Scopes To create a permission, select the permission type you want to create from the item list in the upper right corner of the permission listing. The following sections describe these two types of objects in more detail. 6.1. Creating resource-based permission A resource-based permission defines a set of one or more resources to protect using a set of one or more authorization policies. To create a new resource-based permission, select Create resource-based permission from the Create permission dropdown. Add Resource Permission 6.1.1. Configuration Name A human-readable and unique string describing the permission. A best practice is to use names that are closely related to your business and security requirements, so you can identify them more easily. Description A string containing details about this permission. Apply To Resource Type Specifies if the permission is applied to all resources with a given type. When selecting this field, you are prompted to enter the resource type to protect. Resource Type Defines the resource type to protect. When defined, this permission is evaluated for all resources matching that type. Resources Defines a set of one or more resources to protect. Policy Defines a set of one or more policies to associate with a permission. To associate a policy you can either select an existing policy or create a new one by selecting the type of the policy you want to create. Decision Strategy The Decision Strategy for this permission. 6.1.2. Typed resource permission Resource permissions can also be used to define policies that are to be applied to all resources with a given type . This form of resource-based permission can be useful when you have resources sharing common access requirements and constraints. Frequently, resources within an application can be categorized (or typed) based on the data they encapsulate or the functionality they provide. For example, a financial application can manage different banking accounts where each one belongs to a specific customer. Although they are different banking accounts, they share common security requirements and constraints that are globally defined by the banking organization. With typed resource permissions, you can define common policies to apply to all banking accounts, such as: Only the owner can manage his account Only allow access from the owner's country and/or region Enforce a specific authentication method To create a typed resource permission, click Apply to Resource Type when creating a new resource-based permission. With Apply to Resource Type set to On , you can specify the type that you want to protect as well as the policies that are to be applied to govern access to all resources with type you have specified. Example of a typed resource permission 6.2. Creating scope-based permissions A scope-based permission defines a set of one or more scopes to protect using a set of one or more authorization policies. 
Unlike resource-based permissions, you can use this permission type to create permissions not only for a resource, but also for the scopes associated with it, providing more granularity when defining the permissions that govern your resources and the actions that can be performed on them. To create a new scope-based permission, select Create scope-based permission from the Create permission dropdown. Add Scope Permission 6.2.1. Configuration Name A human-readable and unique string describing the permission. A best practice is to use names that are closely related to your business and security requirements, so you can identify them more easily. Description A string containing details about this permission. Resource Restricts the scopes to those associated with the selected resource. If none is selected, all scopes are available. Scopes Defines a set of one or more scopes to protect. Policy Defines a set of one or more policies to associate with a permission. To associate a policy you can either select an existing policy or create a new one by selecting the type of the policy you want to create. Decision Strategy The Decision Strategy for this permission. 6.3. Policy decision strategies When associating policies with a permission, you can also define a decision strategy to specify how to evaluate the outcome of the associated policies to determine access. Unanimous The default strategy if none is provided. In this case, all policies must evaluate to a positive decision for the final decision to be also positive. Affirmative In this case, at least one policy must evaluate to a positive decision for the final decision to be also positive. Consensus In this case, the number of positive decisions must be greater than the number of negative decisions. If the number of positive and negative decisions is equal, the final decision will be negative.
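To make the three decision strategies concrete, the following small shell function, which is not part of Red Hat build of Keycloak and is purely illustrative, mimics how the outcomes of the associated policies are combined:
# Usage: decide <unanimous|affirmative|consensus> grant|deny [grant|deny ...]
decide() {
  local strategy=$1; shift
  local pos=0 neg=0
  for outcome in "$@"; do
    if [ "$outcome" = grant ]; then pos=$((pos+1)); else neg=$((neg+1)); fi
  done
  case $strategy in
    unanimous)   [ "$neg" -eq 0 ] && echo PERMIT || echo DENY ;;        # every policy must be positive
    affirmative) [ "$pos" -gt 0 ] && echo PERMIT || echo DENY ;;        # at least one positive policy
    consensus)   [ "$pos" -gt "$neg" ] && echo PERMIT || echo DENY ;;   # more positives than negatives; a tie is negative
  esac
}
decide unanimous grant grant deny     # DENY
decide affirmative grant grant deny   # PERMIT
decide consensus grant grant deny     # PERMIT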
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/authorization_services_guide/permission_overview
Chapter 4. InstallPlan [operators.coreos.com/v1alpha1]
Chapter 4. InstallPlan [operators.coreos.com/v1alpha1] Description InstallPlan defines the installation of a set of operators. Type object Required metadata spec 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object InstallPlanSpec defines a set of Application resources to be installed status object InstallPlanStatus represents the information about the status of steps required to complete installation. Status may trail the actual state of a system. 4.1.1. .spec Description InstallPlanSpec defines a set of Application resources to be installed Type object Required approval approved clusterServiceVersionNames Property Type Description approval string Approval is the user approval policy for an InstallPlan. It must be one of "Automatic" or "Manual". approved boolean clusterServiceVersionNames array (string) generation integer source string sourceNamespace string 4.1.2. .status Description InstallPlanStatus represents the information about the status of steps required to complete installation. Status may trail the actual state of a system. Type object Required catalogSources phase Property Type Description attenuatedServiceAccountRef object AttenuatedServiceAccountRef references the service account that is used to do scoped operator install. bundleLookups array BundleLookups is the set of in-progress requests to pull and unpackage bundle content to the cluster. bundleLookups[] object BundleLookup is a request to pull and unpackage the content of a bundle to the cluster. catalogSources array (string) conditions array conditions[] object InstallPlanCondition represents the overall status of the execution of an InstallPlan. message string Message is a human-readable message containing detailed information that may be important to understanding why the plan has its current status. phase string InstallPlanPhase is the current status of a InstallPlan as a whole. plan array plan[] object Step represents the status of an individual step in an InstallPlan. startTime string StartTime is the time when the controller began applying the resources listed in the plan to the cluster. 4.1.3. .status.attenuatedServiceAccountRef Description AttenuatedServiceAccountRef references the service account that is used to do scoped operator install. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. 
For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 4.1.4. .status.bundleLookups Description BundleLookups is the set of in-progress requests to pull and unpackage bundle content to the cluster. Type array 4.1.5. .status.bundleLookups[] Description BundleLookup is a request to pull and unpackage the content of a bundle to the cluster. Type object Required catalogSourceRef identifier path replaces Property Type Description catalogSourceRef object CatalogSourceRef is a reference to the CatalogSource the bundle path was resolved from. conditions array Conditions represents the overall state of a BundleLookup. conditions[] object identifier string Identifier is the catalog-unique name of the operator (the name of the CSV for bundles that contain CSVs) path string Path refers to the location of a bundle to pull. It's typically an image reference. properties string The effective properties of the unpacked bundle. replaces string Replaces is the name of the bundle to replace with the one found at Path. 4.1.6. .status.bundleLookups[].catalogSourceRef Description CatalogSourceRef is a reference to the CatalogSource the bundle path was resolved from. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 4.1.7. .status.bundleLookups[].conditions Description Conditions represents the overall state of a BundleLookup. Type array 4.1.8. .status.bundleLookups[].conditions[] Description Type object Required status type Property Type Description lastTransitionTime string Last time the condition transitioned from one status to another. lastUpdateTime string Last time the condition was probed. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of condition. 4.1.9. .status.conditions Description Type array 4.1.10. .status.conditions[] Description InstallPlanCondition represents the overall status of the execution of an InstallPlan. Type object Property Type Description lastTransitionTime string lastUpdateTime string message string reason string ConditionReason is a camelcased reason for the state transition. status string type string InstallPlanConditionType describes the state of an InstallPlan at a certain point as a whole. 4.1.11. .status.plan Description Type array 4.1.12. .status.plan[] Description Step represents the status of an individual step in an InstallPlan. Type object Required resolving resource status Property Type Description optional boolean resolving string resource object StepResource represents the status of a resource to be tracked by an InstallPlan. status string StepStatus is the current status of a particular resource an in InstallPlan 4.1.13. .status.plan[].resource Description StepResource represents the status of a resource to be tracked by an InstallPlan. Type object Required group kind name sourceName sourceNamespace version Property Type Description group string kind string manifest string name string sourceName string sourceNamespace string version string 4.2. API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1alpha1/installplans GET : list objects of kind InstallPlan /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/installplans DELETE : delete collection of InstallPlan GET : list objects of kind InstallPlan POST : create an InstallPlan /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/installplans/{name} DELETE : delete an InstallPlan GET : read the specified InstallPlan PATCH : partially update the specified InstallPlan PUT : replace the specified InstallPlan /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/installplans/{name}/status GET : read status of the specified InstallPlan PATCH : partially update status of the specified InstallPlan PUT : replace status of the specified InstallPlan 4.2.1. /apis/operators.coreos.com/v1alpha1/installplans HTTP method GET Description list objects of kind InstallPlan Table 4.1. HTTP responses HTTP code Reponse body 200 - OK InstallPlanList schema 401 - Unauthorized Empty 4.2.2. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/installplans HTTP method DELETE Description delete collection of InstallPlan Table 4.2. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind InstallPlan Table 4.3. HTTP responses HTTP code Reponse body 200 - OK InstallPlanList schema 401 - Unauthorized Empty HTTP method POST Description create an InstallPlan Table 4.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.5. Body parameters Parameter Type Description body InstallPlan schema Table 4.6. HTTP responses HTTP code Reponse body 200 - OK InstallPlan schema 201 - Created InstallPlan schema 202 - Accepted InstallPlan schema 401 - Unauthorized Empty 4.2.3. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/installplans/{name} Table 4.7. Global path parameters Parameter Type Description name string name of the InstallPlan HTTP method DELETE Description delete an InstallPlan Table 4.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified InstallPlan Table 4.10. HTTP responses HTTP code Reponse body 200 - OK InstallPlan schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified InstallPlan Table 4.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.12. HTTP responses HTTP code Reponse body 200 - OK InstallPlan schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified InstallPlan Table 4.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.14. Body parameters Parameter Type Description body InstallPlan schema Table 4.15. HTTP responses HTTP code Reponse body 200 - OK InstallPlan schema 201 - Created InstallPlan schema 401 - Unauthorized Empty 4.2.4. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/installplans/{name}/status Table 4.16. Global path parameters Parameter Type Description name string name of the InstallPlan HTTP method GET Description read status of the specified InstallPlan Table 4.17. HTTP responses HTTP code Reponse body 200 - OK InstallPlan schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified InstallPlan Table 4.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.19. HTTP responses HTTP code Reponse body 200 - OK InstallPlan schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified InstallPlan Table 4.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.21. Body parameters Parameter Type Description body InstallPlan schema Table 4.22. HTTP responses HTTP code Reponse body 200 - OK InstallPlan schema 201 - Created InstallPlan schema 401 - Unauthorized Empty
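As a usage note, a common operational task with this API is approving an InstallPlan whose spec.approval is Manual by setting spec.approved to true. The following is a minimal sketch that assumes the oc client, the openshift-operators namespace, and a placeholder InstallPlan name:
# List InstallPlans in the namespace, then approve one; the name and namespace are examples.
oc get installplans -n openshift-operators
oc patch installplan install-abcde -n openshift-operators --type merge --patch '{"spec":{"approved":true}}'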
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/operatorhub_apis/installplan-operators-coreos-com-v1alpha1
12.2. Setting up Automated Notifications for the CA
12.2. Setting up Automated Notifications for the CA 12.2.1. Setting up Automated Notifications in the Console Open the Certificate Manager Console. Open the Configuration tab. Open the Certificate Manager heading in the navigation tree on the left. Then select Notification . The Notification tabs appear on the right side of the window. Notifications can be sent for three kinds of events: newly-issued certificates, revoked certificates, and new certificate requests. To send a notification for any event, select the tab, check the Enable checkbox, and specify information in the following fields: Sender's E-mail Address . Type the full email address of the user who should be notified of any delivery problems. Recipient's E-mail Address . These are the email addresses of the agents who will check the queue. To list more than one recipient, separate the email addresses with commas. This field applies only to notifications for new requests in the queue. Subject . Type the subject title for the notification. Content template path . Type the path, including the file name, of the template to use to construct the message content. Click Save . Note Make sure the mail server is set up correctly. See Section 12.4, "Configuring a Mail Server for Certificate System Notifications" . Customize the notification message templates. See Section 12.3, "Customizing Notification Messages" for more information. Test the configuration. See Section 12.2.3, "Testing Configuration" . Note pkiconsole is being deprecated. 12.2.2. Configuring Specific Notifications by Editing the CS.cfg File Stop the CA subsystem. Open the CS.cfg file for that instance. This file is in the instance's conf/ directory. Edit all of the configuration parameters for the notification type being enabled. For certificate issuing notifications, there are four parameters: For certificate revocation notifications, there are four parameters: For certificate request notifications, there are five parameters: The parameters for the notification messages are explained in Section 12.2, "Setting up Automated Notifications for the CA" . Save the file. Restart the CA instance. If a job has been created to send automated messages, check that the mail server is correctly configured. See Section 12.4, "Configuring a Mail Server for Certificate System Notifications" . The messages that are sent automatically can be customized; see Section 12.3, "Customizing Notification Messages" for more information. 12.2.3. Testing Configuration To test whether the subsystem sends email notifications as configured, do the following: Change the email address in the notification configuration for the request in queue notification to an accessible agent or administrator email address. Open the end-entities page, and request a certificate using the agent-approved enrollment form. When the request gets queued for agent approval, a request-in-queue email notification should be sent. Check the message to see if it contains the configured information. Log into the agent interface, and approve the request. When the server issues a certificate, the user receives a certificate-issued email notification to the address listed in the request. Check the message to see if it has the correct information. Log into the agent interface, and revoke the certificate. The user email account should contain an email message stating that the certificate has been revoked. Check the message to see if it has the correct information.
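For reference, the following sketch shows how the certificate-issued notification parameters from Section 12.2.2 might look in the CS.cfg file. The sender address, subject, and template path are example values only; verify the template location shipped with your Certificate System instance before using them.
# Example values; adjust the sender, subject, and template path for your instance.
ca.notification.certIssued.enabled=true
ca.notification.certIssued.senderEmail=ca-admin@example.com
ca.notification.certIssued.emailSubject=Your certificate has been issued
ca.notification.certIssued.emailTemplate=/usr/share/pki/ca/emails/certIssued_CA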
[ "pkiconsole https://server.example.com:8443/ca", "pki-server stop instance_name", "ca.notification.certIssued.emailSubject ca.notification.certIssued.emailTemplate ca.notification.certIssued.enabled ca.notification.certIssued.senderEmail", "ca.notification.certRevoked.emailSubject ca.notification.certRevoked.emailTemplate ca.notification.certRevoked.enabled ca.notification.certRevoked.senderEmail", "ca.notification.requestInQ.emailSubject ca.notification.requestInQ.emailTemplate ca.notification.requestInQ.enabled ca.notification.requestInQ.recipientEmail ca.notification.requestInQ.senderEmail", "pki-server start instance_name" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/setting_up_automated_notifications
12.6. Export Storage Domains
12.6. Export Storage Domains 12.6.1. Export Storage Domains Note The export storage domain is deprecated. Storage data domains can be detached from a data center and imported to another data center in the same environment, or in a different environment. Virtual machines, floating virtual disk images, and templates can then be uploaded from the imported storage domain to the attached data center. See the Importing Existing Storage Domains section in the Red Hat Virtualization Administration Guide for information on importing storage domains. Storage domains with type set to export contain vms and templates sub-collections, which list the import candidate VMs and templates stored on that particular storage domain. Example 12.6. Listing the virtual machines sub-collection of an export storage domain VMs and templates in these collections have a similar representation to their counterparts in the top-level VMs and templates collection, except they also contain a storage_domain reference and an import action. The import action imports a virtual machine or a template from an export storage domain. The destination cluster and storage domain are specified with cluster and storage_domain references. Include an optional name element to give the virtual machine or template a specific name. Example 12.7. Action to import a virtual machine from an export storage domain Example 12.8. Action to import a template from an export storage domain Include an optional clone Boolean element to import the virtual machine as a new entity. Example 12.9. Action to import a virtual machine as a new entity Include an optional disks element to choose which disks to import using individual disk id elements. Example 12.10. Selecting disks for an import action Delete a virtual machine or template from an export storage domain with a DELETE request. Example 12.11. Delete virtual machine from an export storage domain
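The actions in the examples above are ordinary HTTP requests, so they can also be sent with a command-line client. The following is a minimal sketch of invoking the import action from Example 12.7 with curl; the engine host name, the admin@internal password, and the --insecure option (which skips certificate verification) are example assumptions for a test environment.
# Example host and credentials; do not use --insecure outside a test environment.
curl --insecure --user 'admin@internal:password' --request POST \
  --header 'Content-Type: application/xml' --header 'Accept: application/xml' \
  --data '<action><storage_domain><name>images0</name></storage_domain><cluster><name>Default</name></cluster></action>' \
  'https://engine.example.com/ovirt-engine/api/storagedomains/fabe0451-701f-4235-8f7e-e20e458819ed/vms/082c794b-771f-452f-83c9-b2b5a19c0399/import'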
[ "GET /ovirt-engine/api/storagedomains/fabe0451-701f-4235-8f7e-e20e458819ed/vms Accept: application/xml HTTP/1.1 200 OK Content-Type: application/xml <vms> <vm id=\"082c794b-771f-452f-83c9-b2b5a19c0399\" href=\"/ovirt-engine/api/storagedomains/fabe0451-701f-4235-8f7e-e20e458819ed/ vms/082c794b-771f-452f-83c9-b2b5a19c0399\"> <name>vm1</name> <storage_domain id=\"fabe0451-701f-4235-8f7e-e20e458819ed\" href=\"/ovirt-engine/api/storagedomains/fabe0451-701f-4235-8f7e-e20e458819ed\"/> <actions> <link rel=\"import\" href=\"/ovirt-engine/api/storagedomains/ fabe0451-701f-4235-8f7e-e20e458819ed/vms/ 082c794b-771f-452f-83c9-b2b5a19c0399/import\"/> </actions> </vm> </vms>", "POST /ovirt-engine/api/storagedomains/fabe0451-701f-4235-8f7e-e20e458819ed/vms/ 082c794b-771f-452f-83c9-b2b5a19c0399/import HTTP/1.1 Accept: application/xml Content-type: application/xml <action> <storage_domain> <name>images0</name> </storage_domain> <cluster> <name>Default</name> </cluster> </action>", "POST /ovirt-engine/api/storagedomains/fabe0451-701f-4235-8f7e-e20e458819ed/templates/ 082c794b-771f-452f-83c9-b2b5a19c0399/import HTTP/1.1 Accept: application/xml Content-type: application/xml <action> <storage_domain> <name>images0</name> </storage_domain> <cluster> <name>Default</name> </cluster> </action>", "POST /ovirt-engine/api/storagedomains/fabe0451-701f-4235-8f7e-e20e458819ed/vms/ 082c794b-771f-452f-83c9-b2b5a19c0399/import HTTP/1.1 Accept: application/xml Content-type: application/xml <action> <storage_domain> <name>images0</name> </storage_domain> <cluster> <name>Default</name> </cluster> <clone>true</clone> <vm> <name>MyVM</name> </vm> </action>", "POST /ovirt-engine/api/storagedomains/fabe0451-701f-4235-8f7e-e20e458819ed/vms/ 082c794b-771f-452f-83c9-b2b5a19c0399/import HTTP/1.1 Accept: application/xml Content-type: application/xml <action> <cluster> <name>Default</name> </cluster> <vm> <name>MyVM</name> </vm> <disks> <disk id=\"4825ffda-a997-4e96-ae27-5503f1851d1b\"/> </disks> </action>", "DELETE /ovirt-engine/api/storagedomains/fabe0451-701f-4235-8f7e-e20e458819ed/vms/ 082c794b-771f-452f-83c9-b2b5a19c0399 HTTP/1.1 Accept: application/xml HTTP/1.1 204 No Content" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/sect-export_storage_domains
Chapter 27. Managing local storage by using RHEL system roles
Chapter 27. Managing local storage by using RHEL system roles To manage LVM and local file systems (FS) by using Ansible, you can use the storage role, which is one of the RHEL system roles available in RHEL 8. Using the storage role enables you to automate administration of file systems on disks and logical volumes on multiple machines and across all versions of RHEL starting with RHEL 7.7. 27.1. Creating an XFS file system on a block device by using the storage RHEL system role The example Ansible playbook applies the storage role to create an XFS file system on a block device using the default parameters. Note The storage role can create a file system only on an unpartitioned, whole disk or a logical volume (LV). It cannot create the file system on a partition. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - hosts: managed-node-01.example.com roles: - rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs The volume name ( barefs in the example) is currently arbitrary. The storage role identifies the volume by the disk device listed under the disks: attribute. You can omit the fs_type: xfs line because XFS is the default file system in RHEL 8. To create the file system on an LV, provide the LVM setup under the disks: attribute, including the enclosing volume group. For details, see Creating or resizing a logical volume by using the storage RHEL system role . Do not provide the path to the LV device. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory 27.2. Persistently mounting a file system by using the storage RHEL system role The example Ansible playbook applies the storage role to immediately and persistently mount an XFS file system. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - hosts: managed-node-01.example.com roles: - rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs mount_point: /mnt/data mount_user: somebody mount_group: somegroup mount_mode: 0755 This playbook adds the file system to the /etc/fstab file, and mounts the file system immediately. If the file system on the /dev/sdb device or the mount point directory does not exist, the playbook creates it. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory 27.3. 
Creating or resizing a logical volume by using the storage RHEL system role Use the storage role to perform the following tasks: To create an LVM logical volume in a volume group consisting of many disks To resize an existing file system on LVM To express an LVM volume size in percentage of the pool's total size If the volume group does not exist, the role creates it. If a logical volume exists in the volume group, it is resized if the size does not match what is specified in the playbook. If you are reducing a logical volume, to prevent data loss you must ensure that the file system on that logical volume is not using the space in the logical volume that is being reduced. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Create logical volume ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: myvg disks: - sda - sdb - sdc volumes: - name: mylv size: 2G fs_type: ext4 mount_point: /mnt/data The settings specified in the example playbook include the following: size: <size> You must specify the size by using units (for example, GiB) or percentage (for example, 60%). For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Verify that specified volume has been created or resized to the requested size: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory 27.4. Enabling online block discard by using the storage RHEL system role You can mount an XFS file system with the online block discard option to automatically discard unused blocks. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Enable online block discard ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs mount_point: /mnt/data mount_options: discard For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Verify that online block discard option is enabled: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory 27.5. Creating and mounting an Ext4 file system by using the storage RHEL system role The example Ansible playbook applies the storage role to create and mount an Ext4 file system. 
Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - hosts: managed-node-01.example.com roles: - rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: ext4 fs_label: label-name mount_point: /mnt/data The playbook creates the file system on the /dev/sdb disk. The playbook persistently mounts the file system at the /mnt/data directory. The label of the file system is label-name . Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory 27.6. Creating and mounting an Ext3 file system by using the storage RHEL system role The example Ansible playbook applies the storage role to create and mount an Ext3 file system. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - hosts: all roles: - rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: ext3 fs_label: label-name mount_point: /mnt/data mount_user: somebody mount_group: somegroup mount_mode: 0755 The playbook creates the file system on the /dev/sdb disk. The playbook persistently mounts the file system at the /mnt/data directory. The label of the file system is label-name . Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory 27.7. Creating a swap volume by using the storage RHEL system role This section provides an example Ansible playbook. This playbook applies the storage role to create a swap volume, if it does not exist, or to modify the swap volume, if it already exists, on a block device by using the default parameters. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Create a disk device with swap hosts: managed-node-01.example.com roles: - rhel-system-roles.storage vars: storage_volumes: - name: swap_fs type: disk disks: - /dev/sdb size: 15 GiB fs_type: swap The volume name ( swap_fs in the example) is currently arbitrary. The storage role identifies the volume by the disk device listed under the disks: attribute. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. 
Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory 27.8. Configuring a RAID volume by using the storage RHEL system role With the storage system role, you can configure a RAID volume on RHEL by using Red Hat Ansible Automation Platform and Ansible-Core. Create an Ansible playbook with the parameters to configure a RAID volume to suit your requirements. Warning Device names might change in certain circumstances, for example, when you add a new disk to a system. Therefore, to prevent data loss, use persistent naming attributes in the playbook. For more information about persistent naming attributes, see Overview of persistent naming attributes . Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Create a RAID on sdd, sde, sdf, and sdg ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_volumes: - name: data type: raid disks: [sdd, sde, sdf, sdg] raid_level: raid0 raid_chunk_size: 32 KiB mount_point: /mnt/data state: present For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Verify that the array was correctly created: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory Managing RAID 27.9. Configuring an LVM pool with RAID by using the storage RHEL system role With the storage system role, you can configure an LVM pool with RAID on RHEL by using Red Hat Ansible Automation Platform. You can set up an Ansible playbook with the available parameters to configure an LVM pool with RAID. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Configure LVM pool with RAID ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_pools: - name: my_pool type: lvm disks: [sdh, sdi] raid_level: raid1 volumes: - name: my_volume size: "1 GiB" mount_point: "/mnt/app/shared" fs_type: xfs state: present For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Verify that your pool is on RAID: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory Managing RAID 27.10. 
Configuring a stripe size for RAID LVM volumes by using the storage RHEL system role With the storage system role, you can configure a stripe size for RAID LVM volumes on RHEL by using Red Hat Ansible Automation Platform. You can set up an Ansible playbook with the available parameters to configure an LVM pool with RAID. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Configure stripe size for RAID LVM volumes ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_pools: - name: my_pool type: lvm disks: [sdh, sdi] volumes: - name: my_volume size: "1 GiB" mount_point: "/mnt/app/shared" fs_type: xfs raid_level: raid0 raid_stripe_size: "256 KiB" state: present For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Verify that stripe size is set to the required size: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory Managing RAID 27.11. Configuring an LVM-VDO volume by using the storage RHEL system role You can use the storage RHEL system role to create a VDO volume on LVM (LVM-VDO) with enabled compression and deduplication. Note Because of the storage system role use of LVM-VDO, only one volume can be created per pool. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Create LVM-VDO volume under volume group 'myvg' ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: myvg disks: - /dev/sdb volumes: - name: mylv1 compression: true deduplication: true vdo_pool_size: 10 GiB size: 30 GiB mount_point: /mnt/app/shared The settings specified in the example playbook include the following: vdo_pool_size: <size> The actual size that the volume takes on the device. You can specify the size in human-readable format, such as 10 GiB. If you do not specify a unit, it defaults to bytes. size: <size> The virtual size of VDO volume. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification View the current status of compression and deduplication: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory 27.12. 
Creating a LUKS2 encrypted volume by using the storage RHEL system role You can use the storage role to create and configure a volume encrypted with LUKS by running an Ansible playbook. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: luks_password: <password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create and configure a volume encrypted with LUKS ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs fs_label: <label> mount_point: /mnt/data encryption: true encryption_password: "{{ luks_password }}" For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Find the luksUUID value of the LUKS encrypted volume: View the encryption status of the volume: Verify the created LUKS encrypted volume: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory Encrypting block devices by using LUKS Ansible vault 27.13. Creating shared LVM devices using the storage RHEL system role You can use the storage RHEL system role to create shared LVM devices if you want your multiple systems to access the same storage at the same time. This can bring the following notable benefits: Resource sharing Flexibility in managing storage resources Simplification of storage management tasks Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. lvmlockd is configured on the managed node. For more information, see Configuring LVM to share SAN disks among multiple machines . Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com become: true tasks: - name: Create shared LVM device ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: vg1 disks: /dev/vdb type: lvm shared: true state: present volumes: - name: lv1 size: 4g mount_point: /opt/test1 storage_safe_mode: false storage_use_partitions: true For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory
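All of the procedures in this chapter assume that the control node can reach the managed host through an Ansible inventory. The following is a minimal sketch of such an inventory and of running one of the playbooks against it; the file names and host name are examples to adjust for your environment.
# Minimal example inventory; replace the host name with your managed node.
cat > ~/inventory.ini <<'EOF'
[managed]
managed-node-01.example.com
EOF
ansible-playbook -i ~/inventory.ini --syntax-check ~/playbook.yml
ansible-playbook -i ~/inventory.ini ~/playbook.yml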
[ "--- - hosts: managed-node-01.example.com roles: - rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "--- - hosts: managed-node-01.example.com roles: - rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs mount_point: /mnt/data mount_user: somebody mount_group: somegroup mount_mode: 0755", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Create logical volume ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: myvg disks: - sda - sdb - sdc volumes: - name: mylv size: 2G fs_type: ext4 mount_point: /mnt/data", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'lvs myvg'", "--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Enable online block discard ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs mount_point: /mnt/data mount_options: discard", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'findmnt /mnt/data'", "--- - hosts: managed-node-01.example.com roles: - rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: ext4 fs_label: label-name mount_point: /mnt/data", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "--- - hosts: all roles: - rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: ext3 fs_label: label-name mount_point: /mnt/data mount_user: somebody mount_group: somegroup mount_mode: 0755", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "--- - name: Create a disk device with swap hosts: managed-node-01.example.com roles: - rhel-system-roles.storage vars: storage_volumes: - name: swap_fs type: disk disks: - /dev/sdb size: 15 GiB fs_type: swap", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Create a RAID on sdd, sde, sdf, and sdg ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_volumes: - name: data type: raid disks: [sdd, sde, sdf, sdg] raid_level: raid0 raid_chunk_size: 32 KiB mount_point: /mnt/data state: present", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'mdadm --detail /dev/md/data'", "--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Configure LVM pool with RAID ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_pools: - name: my_pool type: lvm disks: [sdh, sdi] raid_level: raid1 volumes: - name: my_volume size: \"1 GiB\" mount_point: \"/mnt/app/shared\" fs_type: xfs state: present", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'lsblk'", "--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Configure stripe size for RAID 
LVM volumes ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_pools: - name: my_pool type: lvm disks: [sdh, sdi] volumes: - name: my_volume size: \"1 GiB\" mount_point: \"/mnt/app/shared\" fs_type: xfs raid_level: raid0 raid_stripe_size: \"256 KiB\" state: present", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'lvs -o+stripesize /dev/my_pool/my_volume'", "--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Create LVM-VDO volume under volume group 'myvg' ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: myvg disks: - /dev/sdb volumes: - name: mylv1 compression: true deduplication: true vdo_pool_size: 10 GiB size: 30 GiB mount_point: /mnt/app/shared", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'lvs -o+vdo_compression,vdo_compression_state,vdo_deduplication,vdo_index_state' LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert VDOCompression VDOCompressionState VDODeduplication VDOIndexState mylv1 myvg vwi-a-v--- 3.00t vpool0 enabled online enabled online", "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "luks_password: <password>", "--- - name: Manage local storage hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create and configure a volume encrypted with LUKS ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs fs_label: <label> mount_point: /mnt/data encryption: true encryption_password: \"{{ luks_password }}\"", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'cryptsetup luksUUID /dev/sdb' 4e4e7970-1822-470e-b55a-e91efe5d0f5c", "ansible managed-node-01.example.com -m command -a 'cryptsetup status luks-4e4e7970-1822-470e-b55a-e91efe5d0f5c' /dev/mapper/luks-4e4e7970-1822-470e-b55a-e91efe5d0f5c is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/sdb", "ansible managed-node-01.example.com -m command -a 'cryptsetup luksDump /dev/sdb' LUKS header information Version: 2 Epoch: 3 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: 4e4e7970-1822-470e-b55a-e91efe5d0f5c Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 16777216 [bytes] length: (whole device) cipher: aes-xts-plain64 sector: 512 [bytes]", "--- - name: Manage local storage hosts: managed-node-01.example.com become: true tasks: - name: Create shared LVM device ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: vg1 disks: /dev/vdb type: lvm shared: true state: present volumes: - name: lv1 size: 4g mount_point: /opt/test1 storage_safe_mode: false storage_use_partitions: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/automating_system_administration_by_using_rhel_system_roles/managing-local-storage-using-rhel-system-roles_automating-system-administration-by-using-rhel-system-roles
Chapter 6. Conclusion
Chapter 6. Conclusion You have created a new workspace and a new page, and added the following panels to that page: a tree menu on the left, a logout button, and a graphical panel for entering KPIs.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/using_the_dashboard_builder/conclusion
Chapter 5. Installing an IdM server: Without integrated DNS, with an integrated CA as the root CA
Chapter 5. Installing an IdM server: Without integrated DNS, with an integrated CA as the root CA This chapter describes how you can install a new Identity Management (IdM) server without integrated DNS. Note Red Hat strongly recommends installing IdM-integrated DNS for basic usage within the IdM deployment: When the IdM server also manages DNS, there is tight integration between DNS and native IdM tools, which enables automating some of the DNS record management. For more details, see Planning your DNS services and host names . 5.1. Interactive installation During the interactive installation using the ipa-server-install utility, you are asked to supply the basic configuration of the system, for example, the realm, the administrator's password, and the Directory Manager's password. The ipa-server-install installation script creates a log file at /var/log/ipaserver-install.log . If the installation fails, the log can help you identify the problem. This procedure installs a server: Without integrated DNS With integrated Identity Management (IdM) certificate authority (CA) as the root CA, which is the default CA configuration Procedure Run the ipa-server-install utility. The script prompts you to configure an integrated DNS service. Press Enter to select the default no option. The script prompts for several required settings and offers recommended default values in brackets. To accept a default value, press Enter . To provide a custom value, enter the required value. Warning Plan these names carefully. You will not be able to change them after the installation is complete. Enter the passwords for the Directory Server superuser ( cn=Directory Manager ) and for the IdM administration system user account ( admin ). The script prompts for several required settings and offers recommended default values in brackets. To accept a default value, press Enter . To provide a custom value, enter the required value. Enter yes to confirm the server configuration. The installation script now configures the server. Wait for the operation to complete. The installation script produces a file with DNS resource records: the /tmp/ipa.system.records.UFRBto.db file in the example output below. Add these records to the existing external DNS servers. The process of updating the DNS records varies depending on the particular DNS solution. Important The server installation is not complete until you add the DNS records to the existing DNS servers. Additional resources For more information about the DNS resource records you must add to your DNS system, see IdM DNS records for external DNS systems . 5.2. Non-interactive installation You can install a server without integrated DNS and with an integrated Identity Management (IdM) certificate authority (CA) as the root CA, which is the default CA configuration. Note The ipa-server-install installation script creates a log file at /var/log/ipaserver-install.log . If the installation fails, the log can help you identify the problem. Procedure Run the ipa-server-install utility with the options to supply all the required information. 
The minimum required options for non-interactive installation are: --realm to provide the Kerberos realm name --ds-password to provide the password for the Directory Manager (DM), the Directory Server super user --admin-password to provide the password for admin , the IdM administrator --unattended to let the installation process select default options for the host name and domain name For example: The installation script produces a file with DNS resource records: the /tmp/ipa.system.records.UFRPto.db file in the example output below. Add these records to the existing external DNS servers. The process of updating the DNS records varies depending on the particular DNS solution. Important The server installation is not complete until you add the DNS records to the existing DNS servers. Additional resources For more information about the DNS resource records you must add to your DNS system, see IdM DNS records for external DNS systems . For a complete list of options accepted by ipa-server-install , run the ipa-server-install --help command. 5.3. IdM DNS records for external DNS systems After installing an IdM server without integrated DNS, you must add LDAP and Kerberos DNS resource records for the IdM server to your external DNS system. The ipa-server-install installation script generates a file containing the list of DNS resource records with a file name in the format /tmp/ipa.system.records. <random_characters> .db and prints instructions to add those records: This is an example of the contents of the file: Note After adding the LDAP and Kerberos DNS resource records for the IdM server to your DNS system, ensure that the DNS management tools have not added PTR records for ipa-ca . The presence of PTR records for ipa-ca in your DNS could cause subsequent IdM replica installations to fail.
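Once the records from the generated file have been added to your external DNS servers, you can optionally confirm that they resolve before continuing. The following is a minimal verification sketch using the dig utility; it assumes the example.com domain and the server.example.com host name used in the sample record file, so substitute your own values.

dig +short -t SRV _ldap._tcp.example.com.
dig +short -t SRV _kerberos._tcp.example.com.
dig +short -t TXT _kerberos.example.com.

Each SRV query should return a line such as 0 100 389 server.example.com. pointing at the new IdM server; an empty result indicates that the records have not been added or have not propagated yet.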
[ "ipa-server-install", "Do you want to configure integrated DNS (BIND)? [no]:", "Server host name [server.idm.example.com]: Please confirm the domain name [idm.example.com]: Please provide a realm name [IDM.EXAMPLE.COM]:", "Directory Manager password: IPA admin password:", "NetBIOS domain name [EXAMPLE]: Do you want to configure chrony with NTP server or pool address? [no]:", "Continue to configure the system with these values? [no]: yes", "Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server", "ipa-server-install --realm IDM.EXAMPLE.COM --ds-password DM_password --admin-password admin_password --unattended", "Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server", "Please add records in this file to your DNS system: /tmp/ipa.system.records.6zdjqxh3.db", "_kerberos-master._tcp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos-master._udp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos._tcp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos._udp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos.example.com. 86400 IN TXT \"EXAMPLE.COM\" _kpasswd._tcp.example.com. 86400 IN SRV 0 100 464 server.example.com. _kpasswd._udp.example.com. 86400 IN SRV 0 100 464 server.example.com. _ldap._tcp.example.com. 86400 IN SRV 0 100 389 server.example.com." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_identity_management/installing-an-ipa-server-without-integrated-dns_installing-identity-management
Networking
Networking Red Hat build of MicroShift 4.18 Configuring and managing cluster networking Red Hat OpenShift Documentation Team
[ "microshift show-config", "apiServer: advertiseAddress: 10.44.0.0/32 1 auditLog: maxFileAge: 0 maxFileSize: 200 maxFiles: 10 profile: Default namedCertificates: - certPath: \"\" keyPath: \"\" names: - \"\" subjectAltNames: [] debugging: logLevel: \"Normal\" dns: baseDomain: microshift.example.com etcd: memoryLimitMB: 0 ingress: defaultHTTPVersion: 1 forwardedHeaderPolicy: \"\" httpCompression: mimeTypes: - \"\" httpEmptyRequestsPolicy: Respond listenAddress: - \"\" logEmptyRequests: Log ports: http: 80 https: 443 routeAdmissionPolicy: namespaceOwnership: InterNamespaceAllowed status: Managed tuningOptions: clientFinTimeout: \"\" clientTimeout: \"\" headerBufferBytes: 0 headerBufferMaxRewriteBytes: 0 healthCheckInterval: \"\" maxConnections: 0 serverFinTimeout: \"\" serverTimeout: \"\" threadCount: 0 tlsInspectDelay: \"\" tunnelTimeout: \"\" kubelet: manifests: kustomizePaths: - /usr/lib/microshift/manifests - /usr/lib/microshift/manifests.d/* - /etc/microshift/manifests - /etc/microshift/manifests.d/* network: clusterNetwork: - 10.42.0.0/16 serviceNetwork: - 10.43.0.0/16 serviceNodePortRange: 30000-32767 node: hostnameOverride: \"\" nodeIP: \"\" 2 nodeIPv6: \"\" storage: driver: \"\" 3 optionalCsiComponents: 4 - \"\"", "sudo cp /etc/microshift/ovn.yaml.default /etc/microshift/ovn.yaml", "cat /etc/microshift/ovn.yaml", "mtu: 1400", "mtu: 1300", "export KUBECONFIG=USDPWD/kubeconfig", "pod=USD(oc get pods -n openshift-ovn-kubernetes | awk -F \" \" '/ovnkube-master/{print USD1}')", "oc -n openshift-ovn-kubernetes delete pod USDpod", "oc get pods -n openshift-ovn-kubernetes", "[Service] Environment=\"http_proxy=http://USDPROXY_USER:USDPROXY_PASSWORD@USDPROXY_SERVER:USDPROXY_PORT/\"", "sudo systemctl daemon-reload", "sudo systemctl restart rpm-ostreed.service", "sudo mkdir /etc/systemd/system/crio.service.d/", "[Service] Environment=NO_PROXY=\"localhost,127.0.0.1\" Environment=HTTP_PROXY=\"http://USDPROXY_USER:USDPROXY_PASSWORD@USDPROXY_SERVER:USDPROXY_PORT/\" Environment=HTTPS_PROXY=\"http://USDPROXY_USER:USDPROXY_PASSWORD@USDPROXY_SERVER:USDPROXY_PORT/\"", "sudo systemctl daemon-reload", "sudo systemctl restart crio", "sudo systemctl restart microshift", "oc get all -A", "sudo crictl images", "sudo ovs-vsctl show", "9d9f5ea2-9d9d-4e34-bbd2-dbac154fdc93 Bridge br-ex Port br-ex Interface br-ex type: internal Port patch-br-ex_localhost.localdomain-to-br-int 1 Interface patch-br-ex_localhost.localdomain-to-br-int type: patch options: {peer=patch-br-int-to-br-ex_localhost.localdomain} 2 Bridge br-int fail_mode: secure datapath_type: system Port patch-br-int-to-br-ex_localhost.localdomain Interface patch-br-int-to-br-ex_localhost.localdomain type: patch options: {peer=patch-br-ex_localhost.localdomain-to-br-int} Port eebee1ce5568761 Interface eebee1ce5568761 3 Port b47b1995ada84f4 Interface b47b1995ada84f4 4 Port \"3031f43d67c167f\" Interface \"3031f43d67c167f\" 5 Port br-int Interface br-int type: internal Port ovn-k8s-mp0 6 Interface ovn-k8s-mp0 type: internal ovs_version: \"2.17.3\"", "oc get pods -A", "NAMESPACE NAME READY STATUS RESTARTS AGE default i-06166fbb376f14a8bus-west-2computeinternal-debug-qtwcr 1/1 Running 0 46m kube-system csi-snapshot-controller-5c6586d546-lprv4 1/1 Running 0 51m openshift-dns dns-default-45jl7 2/2 Running 0 50m openshift-dns node-resolver-7wmzf 1/1 Running 0 51m openshift-ingress router-default-78b86fbf9d-qvj9s 1/1 Running 0 51m openshift-multus dhcp-daemon-j7qnf 1/1 Running 0 51m openshift-multus multus-r758z 1/1 Running 0 51m 
openshift-operator-lifecycle-manager catalog-operator-85fb86fcb9-t6zm7 1/1 Running 0 51m openshift-operator-lifecycle-manager olm-operator-87656d995-fvz84 1/1 Running 0 51m openshift-ovn-kubernetes ovnkube-master-5rfhh 4/4 Running 0 51m openshift-ovn-kubernetes ovnkube-node-gcnt6 1/1 Running 0 51m openshift-service-ca service-ca-bf5b7c9f8-pn6rk 1/1 Running 0 51m openshift-storage topolvm-controller-549f7fbdd5-7vrmv 5/5 Running 0 51m openshift-storage topolvm-node-rht2m 3/3 Running 0 50m", "NAMESPACE=<nginx-lb-test> 1", "oc create ns USDNAMESPACE", "apply -n USDNAMESPACE -f - <<EOF apiVersion: v1 kind: ConfigMap metadata: name: nginx data: headers.conf: | add_header X-Server-IP \\USDserver_addr always; --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - image: quay.io/packit/nginx-unprivileged imagePullPolicy: Always name: nginx ports: - containerPort: 8080 volumeMounts: - name: nginx-configs subPath: headers.conf mountPath: /etc/nginx/conf.d/headers.conf securityContext: allowPrivilegeEscalation: false seccompProfile: type: RuntimeDefault capabilities: drop: [\"ALL\"] runAsNonRoot: true volumes: - name: nginx-configs configMap: name: nginx items: - key: headers.conf path: headers.conf EOF", "oc get pods -n USDNAMESPACE", "create -n USDNAMESPACE -f - <<EOF apiVersion: v1 kind: Service metadata: name: nginx spec: ports: - port: 81 targetPort: 8080 selector: app: nginx type: LoadBalancer EOF", "oc get svc -n USDNAMESPACE", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx LoadBalancer 10.43.183.104 192.168.1.241 81:32434/TCP 2m", "EXTERNAL_IP=192.168.1.241 seq 5 | xargs -Iz curl -s -I http://USDEXTERNAL_IP:81 | grep X-Server-IP", "X-Server-IP: 10.42.0.41 X-Server-IP: 10.42.0.41 X-Server-IP: 10.42.0.43 X-Server-IP: 10.42.0.41 X-Server-IP: 10.42.0.43", "export NODEPORT=30700", "export INTERFACE_IP=192.168.150.33", "sudo nft -a insert rule ip nat PREROUTING tcp dport USDNODEPORT ip daddr USDINTERFACE_IP drop", "sudo nft -a list chain ip nat PREROUTING table ip nat { chain PREROUTING { # handle 1 type nat hook prerouting priority dstnat; policy accept; tcp dport 30700 ip daddr 192.168.150.33 drop # handle 134 counter packets 108 bytes 18074 jump OVN-KUBE-ETP # handle 116 counter packets 108 bytes 18074 jump OVN-KUBE-EXTERNALIP # handle 114 counter packets 108 bytes 18074 jump OVN-KUBE-NODEPORT # handle 112 } }", "sudo nft -a delete rule ip nat PREROUTING handle 134", "sudo oc get pod -n openshift-ovn-kubernetes <ovnkube-node-pod-name> -o json | jq -r '.spec.hostNetwork' true", "sudo vi /etc/sysconfig/firewalld FIREWALLD_ARGS=--debug=10", "sudo systemctl restart firewalld.service", "sudo systemd-cgls -u firewalld.service", "2023-06-28 10:46:37 DEBUG1: config.getZoneByName('public') 2023-06-28 10:46:37 DEBUG1: config.zone.7.addPort('8080', 'tcp') 2023-06-28 10:46:37 DEBUG1: config.zone.7.getSettings() 2023-06-28 10:46:37 DEBUG1: config.zone.7.update('...') 2023-06-28 10:46:37 DEBUG1: config.zone.7.Updated('public')", "2023-06-28 10:47:57 DEBUG1: config.getZoneByName('public') 2023-06-28 10:47:57 DEBUG2: config.zone.7.Introspect() 2023-06-28 10:47:57 DEBUG1: config.zone.7.removePort('8080', 'tcp') 2023-06-28 10:47:57 DEBUG1: config.zone.7.getSettings() 2023-06-28 10:47:57 DEBUG1: config.zone.7.update('...') 2023-06-28 10:47:57 DEBUG1: config.zone.7.Updated('public')", "journalctl -u crio | grep \"local port\"", "Jun 25 16:27:37 rhel92 crio[77216]: time=\"2023-06-25 
16:27:37.033003098+08:00\" level=info msg=\"Opened local port tcp:443\"", "Jun 25 16:24:11 rhel92 crio[77216]: time=\"2023-06-25 16:24:11.342088450+08:00\" level=info msg=\"Closing host port tcp:443\"", "oc get pods -n openshift-ovn-kubernetes | awk '/ovnkube-master/{print USD1}'", "ovnkube-master-n2shv", "oc logs -n openshift-ovn-kubernetes <ovnkube-master-pod-name> ovnkube-master | grep -E \"OVN-KUBE-NODEPORT|OVN-KUBE-EXTERNALIP\"", "I0625 09:07:00.992980 2118395 iptables.go:27] Adding rule in table: nat, chain: OVN-KUBE-NODEPORT with args: \"-p TCP -m addrtype --dst-type LOCAL --dport 32718 -j DNAT --to-destination 10.96.178.142:8081\" for protocol: 0", "Deleting rule in table: nat, chain: OVN-KUBE-NODEPORT with args: \"-p TCP -m addrtype --dst-type LOCAL --dport 32718 -j DNAT --to-destination 10.96.178.142:8081\" for protocol: 0", "I0625 09:34:10.406067 128902 iptables.go:27] Adding rule in table: nat, chain: OVN-KUBE-EXTERNALIP with args: \"-p TCP -d 172.16.47.129 --dport 8081 -j DNAT --to-destination 10.43.114.94:8081\" for protocol: 0", "I0625 09:37:00.976953 128902 iptables.go:63] Deleting rule in table: nat, chain: OVN-KUBE-EXTERNALIP with args: \"-p TCP -d 172.16.47.129 --dport 8081 -j DNAT --to-destination 10.43.114.94:8081\" for protocol: 0", "NAME READY UP-TO-DATE AVAILABLE AGE router-default 1/1 1 1 2d23h", "ingress: listenAddress: - \"\" 1 ports: 2 http: 80 https: 443 routeAdmissionPolicy: namespaceOwnership: InterNamespaceAllowed 3 status: Managed 4", "ingress: ports: http: 80 https: 443 routeAdmissionPolicy: namespaceOwnership: InterNamespaceAllowed status: Removed 1", "sudo systemctl restart microshift", "oc -n openshift-ingress get svc", "No resources found in openshift-ingress namespace.", "ingress: ports: 1 http: 80 https: 443 routeAdmissionPolicy: namespaceOwnership: InterNamespaceAllowed status: Managed 2", "sudo systemctl restart microshift", "ingress: listenAddress: - \"<host_network>\" 1", "ingress: listenAddress: - 10.2.1.100", "ingress: listenAddress: - 10.2.1.100 - 10.2.2.10 - ens3", "sudo systemctl restart microshift", "ingress: routeAdmissionPolicy: namespaceOwnership: Strict 1", "sudo systemctl restart microshift", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: []", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default podSelector: {} policyTypes: - Ingress", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {}", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchLabels: role: frontend - from: - podSelector: matchLabels: role: backend", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - 
from: - podSelector: matchExpressions: - {key: role, operator: In, values: [frontend, backend]}", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy1 spec: podSelector: matchLabels: role: db ingress: - from: - podSelector: matchLabels: role: frontend --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy2 spec: podSelector: matchLabels: role: client ingress: - from: - podSelector: matchLabels: role: frontend", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy3 spec: podSelector: matchExpressions: - {key: role, operator: In, values: [db, client]} ingress: - from: - podSelector: matchLabels: role: frontend", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "touch <policy_name>.yaml", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} policyTypes: - Ingress ingress: []", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {}", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-traffic-pod spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y", "oc apply -f <policy_name>.yaml -n <namespace>", "networkpolicy.networking.k8s.io/deny-by-default created", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default 1 spec: podSelector: {} 2 ingress: [] 3", "oc apply -f deny-by-default.yaml", "networkpolicy.networking.k8s.io/deny-by-default created", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-external namespace: default spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {}", "oc apply -f web-allow-external.yaml", "networkpolicy.networking.k8s.io/web-allow-external created", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-all-namespaces namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2", "oc apply -f web-allow-all-namespaces.yaml", "networkpolicy.networking.k8s.io/web-allow-all-namespaces created", "oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80", "oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh", "wget -qO- --timeout=2 http://web.default", "<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. 
Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2", "oc apply -f web-allow-prod.yaml", "networkpolicy.networking.k8s.io/web-allow-prod created", "oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80", "oc create namespace prod", "oc label namespace/prod purpose=production", "oc create namespace dev", "oc label namespace/dev purpose=testing", "oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh", "wget -qO- --timeout=2 http://web.default", "wget: download timed out", "oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh", "wget -qO- --timeout=2 http://web.default", "<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>", "oc get networkpolicy", "oc apply -n <namespace> -f <policy_file>.yaml", "oc edit networkpolicy <policy_name> -n <namespace>", "oc describe networkpolicy <policy_name> -n <namespace>", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "oc delete networkpolicy <policy_name> -n <namespace>", "networkpolicy.networking.k8s.io/default-deny deleted", "oc get networkpolicy", "oc describe networkpolicy <policy_name> -n <namespace>", "oc describe networkpolicy allow-same-namespace", "Name: allow-same-namespace Namespace: ns1 Created on: 2021-05-24 22:28:56 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: PodSelector: <none> Not affecting egress traffic Policy Types: Ingress", "sudo dnf install microshift-multus", "sudo systemctl restart", "oc get pod -A | grep multus", "openshift-multus dhcp-daemon-ktzqf 1/1 Running 0 45h openshift-multus multus-4frf4 1/1 Running 0 45h", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: bridge-conf spec: config: '{ \"cniVersion\": \"0.4.0\", \"type\": \"bridge\", \"bridge\": \"test-bridge\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"host-local\", \"ranges\": [ [ { \"subnet\": \"10.10.0.0/16\", \"rangeStart\": \"10.10.1.20\", \"rangeEnd\": \"10.10.3.50\", \"gateway\": \"10.10.0.254\" } ] ], \"dataDir\": \"/var/lib/cni/test-bridge\" } }'", "{ \"cniVersion\": \"0.3.1\", \"name\": \"ipvlan-net\", \"type\": \"ipvlan\", \"master\": \"eth1\", \"linkInContainer\": false, \"mode\": \"l3\", 
\"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"192.168.10.10/24\" } ] } }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"macvlan-net\", \"type\": \"macvlan\", \"master\": \"eth1\", \"linkInContainer\": false, \"mode\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } }", "oc get pods -n openshift-multus", "NAME READY STATUS RESTARTS AGE dhcp-daemon-dfbzw 1/1 Running 0 5h multus-rz8xc 1/1 Running 0 5h", "oc apply -f network-attachment-definition.yaml", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: bridge-conf spec: config: '{ \"cniVersion\": \"0.4.0\", \"type\": \"bridge\", 1 \"bridge\": \"br-test\", 2 \"mode\": \"bridge\", \"ipam\": { \"type\": \"host-local\", 3 \"ranges\": [ [ { \"subnet\": \"10.10.0.0/24\", \"rangeStart\": \"10.10.0.20\", \"rangeEnd\": \"10.10.0.50\", \"gateway\": \"10.10.0.254\" } ], [ { \"subnet\": \"fd00:IJKL:MNOP:10::0/64\", 4 \"rangeStart\": \"fd00:IJKL:MNOP:10::1\", \"rangeEnd\": \"fd00:IJKL:MNOP:10::9\" \"dataDir\": \"/var/lib/cni/br-test\" } }'", "apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: <network> [, <network> ,...] 1", "apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: bridge-conf", "apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"<network>\", 1 \"namespace\": \"<namespace>\", 2 \"default-route\": [\"<default-route>\"] 3 } ]", "oc apply -f ./ <test_bridge> .yaml 1", "pod/test_bridge created", "apiVersion: v1 kind: Pod metadata: name: test_bridge annotations: k8s.v1.cni.cncf.io/networks: bridge-conf labels: app: test_bridge spec: terminationGracePeriodSeconds: 0 containers: - name: hello-microshift image: quay.io/microshift/busybox:1.36 command: [\"/bin/sh\"] args: [\"-c\", \"while true; do echo -ne \\\"HTTP/1.0 200 OK\\r\\nContent-Length: 16\\r\\n\\r\\nHello MicroShift\\\" | nc -l -p 8080 ; done\"] ports: - containerPort: 8080 protocol: TCP securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL runAsNonRoot: true runAsUser: 1001 runAsGroup: 1001 seccompProfile: type: RuntimeDefault", "apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: bridge-conf", "oc get pod <name> -o yaml 1", "oc get pod <test_bridge> -o yaml", "apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: bridge-conf k8s.v1.cni.cncf.io/network-status: |- 1 [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.42.0.18\" ], \"default\": true, \"dns\": {} },{ \"name\": \"bridge-conf\", \"interface\": \"net1\", \"ips\": [ \"20.2.2.100\" ], \"mac\": \"22:2f:60:a5:f8:00\", \"dns\": {} }] name: pod namespace: default spec: status:", "oc get pod", "NAME READY STATUS RESTARTS AGE test_bridge 1/1 Running 0 81s", "ip a show br-test", "22: br-test: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 96:bf:ca:be:1d:15 brd ff:ff:ff:ff:ff:ff inet6 fe80::34e2:bbff:fed2:31f2/64 scope link valid_lft forever preferred_lft forever", "sudo ip addr add 10.10.0.10/24 dev br-test", "ip a show br-test", "22: br-test: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 96:bf:ca:be:1d:15 brd ff:ff:ff:ff:ff:ff inet 10.10.0.10/24 scope global br-test 1 valid_lft forever preferred_lft forever inet6 fe80::34e2:bbff:fed2:31f2/64 scope link valid_lft forever preferred_lft forever", "oc get pod test-bridge 
--output=jsonpath='{.metadata.annotations.k8s\\.v1\\.cni\\.cncf\\.io/network-status}'", "[{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.42.0.17\" ], \"mac\": \"0a:58:0a:2a:00:11\", \"default\": true, \"dns\": {} },{ \"name\": \"default/bridge-conf\", 1 \"interface\": \"net1\", \"ips\": [ \"10.10.0.20\" ], \"mac\": \"82:01:98:e5:0c:b7\", \"dns\": {}", "oc exec -ti test-bridge -- ip a", "1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0@if21: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue link/ether 0a:58:0a:2a:00:11 brd ff:ff:ff:ff:ff:ff inet 10.42.0.17/24 brd 10.42.0.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::858:aff:fe2a:11/64 scope link valid_lft forever preferred_lft forever 3: net1@if23: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue link/ether 82:01:98:e5:0c:b7 brd ff:ff:ff:ff:ff:ff inet 10.10.0.20/24 brd 10.10.0.255 scope global net1 1 valid_lft forever preferred_lft forever inet6 fe80::8001:98ff:fee5:cb7/64 scope link valid_lft forever preferred_lft forever", "curl 10.10.0.20:8080", "Hello MicroShift", "oc delete pod <name> -n <namespace>", "Warning NoNetworkFound 0s multus cannot find a network-attachment-definitio (asdasd) in namespace (default): network-attachment-definitions.k8s.cni.cncf.io \"bad-ref-doesnt-exist\" not found", "Feb 06 13:47:31 dev microshift[1494]: kubelet E0206 13:47:31.163290 1494 pod_workers.go:1298] \"Error syncing pod, skipping\" err=\"network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. 
Has your network provider started?\" pod=\"default/samplepod\" podUID=\"fe0f7f7a-8c47-4488-952b-8abc0d8e2602\"", "cannot find a network-attachment-definition (bad-conf) in namespace (default): network-attachment-definitions.k8s.cni.cncf.io \"bad-conf\" not found\" pod=\"default/samplepod\"`", "\"CreatePodSandbox for pod failed\" err=\"rpc error: code = Unknown desc = failed to create pod network sandbox k8s_samplepod_default_5fa13105-1bfb-4c6b-aee7-3437cfb50e25_0(7517818bd8e85f07b551f749c7529be88b4e7daef0dd572d049aa636950c76c6): error adding pod default_samplepod to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [default/samplepod/5fa13105-1bfb-4c6b-aee7-3437cfb50e25]: error loading k8s delegates k8s args: TryLoadPodDelegates: error in getting k8s network for pod: GetNetworkDelegates: failed getting the delegate: getKubernetesDelegate: cannot find a network-attachment-definition (bad-conf) in namespace (default): network-attachment-definitions.k8s.cni.cncf.io \\\"bad-conf\\\" not found\" pod=\"default/samplepod\"", "oc expose pod hello-microshift -n USDnamespace", "oc expose svc/hello-microshift --hostname=microshift.com USDnamespace", "oc get routes -o yaml <name of resource> -n USDnamespace 1", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: hello-microshift namespace: hello-microshift spec: host: microshift.com 1 port: targetPort: 8080 2 to: kind: Service name: hello-microshift", "oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header=max-age=31536000; includeSubDomains;preload\"", "apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3 spec: host: def.abc.com tls: termination: \"reencrypt\" wildcardPolicy: \"Subdomain\"", "oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"", "metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=0", "oc annotate route --all -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"", "oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations \"haproxy.router.openshift.io/hsts_header\"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{\"\\n\"}}{{else}}{{\"\"}}{{end}}{{end}}{{end}}'", "Name: routename HSTS: max-age=0", "oc annotate route --all --all-namespaces --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000;preload;includeSubDomains\"", "oc annotate route --all -n <my_namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000;preload;includeSubDomains\" 1", "oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations \"haproxy.router.openshift.io/hsts_header\"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{\"\\n\"}}{{else}}{{\"\"}}{{end}}{{end}}{{end}}'", "Name: <_routename_> HSTS: max-age=31536000;preload;includeSubDomains", "tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> && host <podip 2> 1", "tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789", "oc annotate route <route_name> router.openshift.io/cookie_name=\"<cookie_name>\"", "oc annotate route my_route router.openshift.io/cookie_name=\"my_cookie\"", "ROUTE_NAME=USD(oc get route <route_name> -o 
jsonpath='{.spec.host}')", "curl USDROUTE_NAME -k -c /tmp/cookie_jar", "curl USDROUTE_NAME -k -b /tmp/cookie_jar", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-unsecured spec: host: www.example.com path: \"/test\" 1 to: kind: Service name: service-name", "apiVersion: route.openshift.io/v1 kind: Route spec: httpHeaders: actions: response: - name: X-Frame-Options action: type: Set set: value: SAMEORIGIN", "apiVersion: route.openshift.io/v1 kind: Route spec: host: app.example.com tls: termination: edge to: kind: Service name: app-example httpHeaders: actions: 1 response: 2 - name: Content-Location 3 action: type: Set 4 set: value: /lang/en-us 5", "oc -n app-example create -f app-example-route.yaml", "apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: \"reencrypt\" 1 route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 2 spec: rules: - host: www.example.com 3 http: paths: - backend: service: name: frontend port: number: 443 path: / pathType: Prefix tls: - hosts: - www.example.com secretName: example-com-tls-certificate", "spec: rules: - host: www.example.com http: paths: - path: '' pathType: ImplementationSpecific backend: service: name: frontend port: number: 443", "oc apply -f ingress.yaml", "oc get routes", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD frontend-gnztq www.example.com frontend 443 reencrypt/Redirect None", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-gnztq ownerReferences: - apiVersion: networking.k8s.io/v1 controller: true kind: Ingress name: frontend uid: 4e6c59cc-704d-4f44-b390-617d879033b6 spec: host: www.example.com path: / port: targetPort: https tls: certificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- insecureEdgeTerminationPolicy: Redirect key: | -----BEGIN RSA PRIVATE KEY----- [...] -----END RSA PRIVATE KEY----- termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- to: kind: Service name: frontend", "apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend spec: rules: tls: - {} 1", "oc create -f example-ingress.yaml", "oc get routes -o yaml", "apiVersion: v1 items: - apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-j9sdd 1 spec: tls: 2 insecureEdgeTerminationPolicy: Redirect termination: edge 3", "oc create secret generic dest-ca-cert --from-file=tls.crt=<file_path>", "oc -n test-ns create secret generic dest-ca-cert --from-file=tls.crt=tls.crt", "secret/dest-ca-cert created", "apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: \"reencrypt\" route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 1", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend annotations: route.openshift.io/termination: reencrypt route.openshift.io/destination-ca-certificate-secret: secret-ca-cert spec: tls: insecureEdgeTerminationPolicy: Redirect termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] 
-----END CERTIFICATE-----", "rpm -q firewalld", "sudo dnf install -y firewalld", "sudo systemctl enable firewalld --now", "sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16", "sudo firewall-cmd --permanent --zone=trusted --add-source=169.254.169.1", "sudo firewall-cmd --permanent --zone=public --add-port=<port number>/<port protocol>", "sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp", "sudo firewall-cmd --get-services", "sudo firewall-cmd --add-service=mdns", "sudo firewall-offline-cmd --permanent --zone=trusted --add-source=10.42.0.0/16", "sudo firewall-offline-cmd --permanent --zone=trusted --add-source=<custom IP range>", "sudo firewall-offline-cmd --permanent --zone=trusted --add-source=169.254.169.1", "sudo firewall-cmd --permanent --zone=trusted --add-source=fd01::/48", "sudo firewall-cmd --reload", "sudo firewall-cmd --list-all", "sudo firewall-cmd --zone=trusted --list-all", "sudo systemctl stop microshift", "sudo systemctl stop kubepods.slice", "sudo /usr/bin/microshift-cleanup-data --ovn", "IP=\"10.44.0.1\" 1 sudo nmcli con add type loopback con-name stable-microshift ifname lo ip4 USD{IP}/32", "sudo nmcli conn modify stable-microshift ipv4.ignore-auto-dns yes", "sudo nmcli conn modify stable-microshift ipv4.dns \"10.44.1.1\"", "NAME=\"USD(hostnamectl hostname)\"", "echo \"USDIP USDNAME\" | sudo tee -a /etc/hosts >/dev/null", "sudo tee /etc/microshift/config.yaml > /dev/null <<EOF node: hostnameOverride: USD(echo USDNAME) nodeIP: USD(echo USDIP) EOF", "sudo systemctl reboot 1", "export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig sudo -E oc get pods -A", "NAMESPACE NAME READY STATUS RESTARTS AGE kube-system csi-snapshot-controller-74d566564f-66n2f 1/1 Running 0 1m openshift-dns dns-default-dxglm 2/2 Running 0 1m openshift-dns node-resolver-dbf5v 1/1 Running 0 1m openshift-ingress router-default-8575d888d8-xmq9p 1/1 Running 0 1m openshift-ovn-kubernetes ovnkube-master-gcsx8 4/4 Running 1 1m openshift-ovn-kubernetes ovnkube-node-757mf 1/1 Running 1 1m openshift-service-ca service-ca-7d7c579f54-68jt4 1/1 Running 0 1m openshift-storage topolvm-controller-6d777f795b-bx22r 5/5 Running 0 1m openshift-storage topolvm-node-fcf8l 4/4 Running 0 1m" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html-single/networking/index
Chapter 61. Storage
Chapter 61. Storage No support for thin provisioning on top of RAID in a cluster While RAID logical volumes and thinly provisioned logical volumes can be used in a cluster when activated exclusively, there is currently no support for thin provisioning on top of RAID in a cluster. This is the case even if the combination is activated exclusively. Currently this combination is only supported in LVM's single machine non-clustered mode. (BZ# 1014758 ) Interaction problems with the lvmetad daemon when the mirror segment type is used When the legacy mirror segment type is used to create mirrored logical volumes with 3 or more legs, there can be interaction problems with the lvmetad daemon. The observed problems occur only after a second device failure, when mirror fault policies are set to the non-default allocate option, when lvmetad is used, and when there has been no reboot of the machine between device failure events. The simplest workaround is to disable lvmetad by setting use_lvmetad = 0 in the lvm.conf file. These issues do not arise with the raid1 segment type, which is the default type for Red Hat Enterprise Linux 7. (BZ# 1380521 ) Important restrictions for Red Hat Enterprise Linux 7.3 upgrades on systems with RAID4 and RAID10 logical volumes The following important restrictions apply to Red Hat Enterprise Linux 7.3 upgrades on systems with RAID4 and RAID10 logical volumes: Do not upgrade any systems with existing LVM RAID4 or RAID10 logical volumes to Red Hat Enterprise Linux 7.3 because these logical volumes will fail to activate. All other types are unaffected. If you do not have any existing RAID4 or RAID10 logical volumes and you upgrade, do not create any new RAID4 logical volumes because those may fail to activate with later releases and updates. It is safe to create RAID10 logical volumes on Red Hat Enterprise Linux 7.3. A z-stream fix is being worked on to allow for the activation of existing RAID4 and RAID10 logical volumes and the creation of new RAID4 logical volumes with Red Hat Enterprise Linux 7.3. (BZ#1385149) The system sometimes becomes unresponsive if there are no working network paths to the iSCSI target When using iSCSI targets, continuous multipathing from the initiator to the target is required, just as it is for zfcp-attached SCSI logical unit numbers (LUNs). If swap is on iSCSI and the system is under memory pressure when an error recovery occurs in the network path, the system needs additional memory to complete the error recovery. As a consequence, the system can become unresponsive. To work around this problem, keep at least one working network path to the iSCSI target so that the system can still obtain memory from swap. (BZ#1389245) Exit code returned from the lvextend command has changed Previously, if the lvextend or lvresize commands were run in a way that would result in no change to the size of the logical volume, an attempt was still made to resize the file system. The unnecessary attempt to resize the file system is no longer made, and this has caused the exit code of the command to change. LVM makes no guarantees of the consistency of exit codes beyond zero (success) and non-zero (failure). (BZ# 1354396 )
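For the lvmetad workaround mentioned above, the change amounts to a single option in /etc/lvm/lvm.conf. A minimal sketch of the relevant fragment follows; the option lives in the global section of the file:

# /etc/lvm/lvm.conf
global {
    use_lvmetad = 0
}

After changing the option you would also typically stop the lvm2-lvmetad service and socket so that the running daemon is no longer consulted.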
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/known_issues_storage
5.5. Adding and Deleting Members
5.5. Adding and Deleting Members The procedures to add or delete a cluster member vary depending on whether the cluster is a newly configured cluster or a cluster that is already configured and running. To add a member to a new cluster, refer to Section 5.5.1, "Adding a Member to a New Cluster" . To add or delete a cluster member in an existing cluster, refer to the following sections: Section 5.5.2, "Adding a Member to a Running DLM Cluster" Section 5.5.3, "Deleting a Member from a DLM Cluster" Section 5.5.4, "Adding a GULM Client-only Member" Section 5.5.5, "Deleting a GULM Client-only Member" Section 5.5.6, "Adding or Deleting a GULM Lock Server Member" 5.5.1. Adding a Member to a New Cluster To add a member to a new cluster, follow these steps: At system-config-cluster , in the Cluster Configuration Tool tab, click Cluster Node . At the bottom of the right frame (labeled Properties ), click the Add a Cluster Node button. Clicking that button causes a Node Properties dialog box to be displayed. For a DLM cluster, the Node Properties dialog box presents text boxes for Cluster Node Name and Quorum Votes (refer to Figure 5.5, "Adding a Member to a New DLM Cluster" ). For a GULM cluster, the Node Properties dialog box presents text boxes for Cluster Node Name and Quorum Votes , and presents a checkbox for GULM Lockserver (refer to Figure 5.6, "Adding a Member to a New GULM Cluster" ) Important The number of nodes that can be configured as GULM lock servers is limited to either one, three, or five. Figure 5.5. Adding a Member to a New DLM Cluster Figure 5.6. Adding a Member to a New GULM Cluster At the Cluster Node Name text box, specify a node name. The entry can be a name or an IP address of the node on the cluster subnet. Note Each node must be on the same subnet as the node from which you are running the Cluster Configuration Tool and must be defined either in DNS or in the /etc/hosts file of each cluster node. Note The node on which you are running the Cluster Configuration Tool must be explicitly added as a cluster member; the node is not automatically added to the cluster configuration as a result of running the Cluster Configuration Tool . Optionally, at the Quorum Votes text box, you can specify a value; however in most configurations you can leave it blank. Leaving the Quorum Votes text box blank causes the quorum votes value for that node to be set to the default value of 1 . Click OK . Configure fencing for the node: Click the node that you added in the step. At the bottom of the right frame (below Properties ), click Manage Fencing For This Node . Clicking Manage Fencing For This Node causes the Fence Configuration dialog box to be displayed. At the Fence Configuration dialog box, bottom of the right frame (below Properties ), click Add a New Fence Level . Clicking Add a New Fence Level causes a fence-level element (for example, Fence-Level-1 , Fence-Level-2 , and so on) to be displayed below the node in the left frame of the Fence Configuration dialog box. Click the fence-level element. At the bottom of the right frame (below Properties ), click Add a New Fence to this Level . Clicking Add a New Fence to this Level causes the Fence Properties dialog box to be displayed. At the Fence Properties dialog box, click the Fence Device Type drop-down box and select the fence device for this node. Also, provide additional information required (for example, Port and Switch for an APC Power Device). At the Fence Properties dialog box, click OK . 
Clicking OK causes a fence device element to be displayed below the fence-level element. To create additional fence devices at this fence level, return to step 6d. Otherwise, proceed to the next step. To create additional fence levels, return to step 6c. Otherwise, proceed to the next step. If you have configured all the fence levels and fence devices for this node, click Close . Choose File => Save to save the changes to the cluster configuration. To continue configuring a new cluster, proceed to Section 5.6, "Configuring a Failover Domain" .
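As the note above indicates, every node name you enter must resolve on the cluster subnet, either through DNS or through the /etc/hosts file on each cluster node. A purely hypothetical /etc/hosts fragment for a three-node cluster might look like the following; the addresses and host names are placeholders:

192.168.0.11   node1.example.com   node1
192.168.0.12   node2.example.com   node2
192.168.0.13   node3.example.com   node3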
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-add-delete-member-ca
Chapter 107. Password schema reference
Chapter 107. Password schema reference Used in: KafkaUserScramSha512ClientAuthentication Property Property type Description valueFrom PasswordSource Secret from which the password should be read.
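As an illustration of where this type appears, the following is a sketch of a KafkaUser resource that uses SCRAM-SHA-512 authentication with a predefined password read through valueFrom; the user, cluster, and Secret names are placeholders, and the referenced key must already exist in the Secret:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512
    password:
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: my-password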
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-password-reference
Chapter 1. Using Argo Rollouts for progressive deployment delivery
Chapter 1. Using Argo Rollouts for progressive deployment delivery Important Argo Rollouts is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Progressive delivery is the process of releasing product updates in a controlled and gradual manner. Progressive delivery reduces the risk of a release by exposing the new version of a product update only to a subset of users initially. The process involves continuously observing and analyzing this new version to verify whether its behavior matches the requirements and expectations set. The verifications continue as the process gradually exposes the product update to a broader and wider audience. OpenShift Container Platform provides some progressive delivery capability by using routes to split traffic between different services, but this typically requires manual intervention and management. With Argo Rollouts, you can use automation and metric analysis to support progressive deployment delivery and drive the automated rollout or rollback of a new version of an application. Argo Rollouts provide advanced deployment capabilities and enable integration with ingress controllers and service meshes. You can use Argo Rollouts to manage multiple replica sets that represent different versions of the deployed application. Depending on your deployment strategy, you can handle traffic to these versions during an update by optimizing their existing traffic shaping abilities and gradually shifting traffic to the new version. You can combine Argo Rollouts with a metric provider like Prometheus to do metric-based and policy-driven rollouts and rollbacks based on the parameters set. 1.1. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Red Hat OpenShift GitOps 1.9.0 or a newer version is installed on your cluster. 1.2. Benefits of Argo Rollouts Managing and coordinating advanced deployment strategies in traditional infrastructure often involves long maintenance windows. Automation with tools like OpenShift Container Platform and Red Hat OpenShift GitOps can reduce these windows, but setting up these strategies can still be challenging. With Argo Rollouts, you simplify this process by allowing application teams to define their rollout strategy declaratively. Teams no longer need to define multiple deployments and services or create automation for traffic shaping and integration of tests. Using Argo Rollouts, you can encapsulate all the required definitions for a declarative rollout strategy, automate and manage the process. 
Using Argo Rollouts as a default workload in Red Hat OpenShift GitOps provides the following benefits: Automated progressive delivery as part of the GitOps workflow Advanced deployment capabilities Optimize the existing advanced deployment strategies such as blue-green or canary Zero downtime updates for deployments Fine-grained, weighted traffic shifting Able to test without any new traffic hitting the production environment Automated rollbacks and promotions Manual judgment Customizable metric queries and analysis of business key performance indicators (KPIs) Integration with ingress controller and Red Hat OpenShift Service Mesh for advanced traffic routing Integration with metric providers for deployment strategy analysis Usage of multiple providers With Argo Rollouts, users can more easily adopt progressive delivery in end-user environments. This provides structure and guidelines without requiring teams to learn about traffic managers and complex infrastructure. With automated rollouts, the Red Hat OpenShift GitOps Operator provides security to your end-user environments and helps manage the resources, cost, and time effectively. Existing users who use Argo CD with security and automated deployments get feedback early in the process and avoid problems that impact them. 1.3. About RolloutManager custom resources and specification To use Argo Rollouts, you must install Red Hat OpenShift GitOps Operator on the cluster, and then create and submit a RolloutManager custom resource (CR) to the Operator in the namespace of your choice. You can scope the RolloutManager CR for single or multiple namespaces. The Operator creates an argo-rollouts instance with the following namespace-scoped supporting resources: Argo Rollouts controller Argo Rollouts metrics service Argo Rollouts service account Argo Rollouts roles Argo Rollouts role bindings Argo Rollouts secret You can specify the command arguments, environment variables, a custom image name, and so on for the Argo Rollouts controller resource in the spec of the RolloutsManager CR. The RolloutManager CR spec defines the desired state of Argo Rollouts. Example: RolloutManager CR apiVersion: argoproj.io/v1alpha1 kind: RolloutManager metadata: name: argo-rollout labels: example: basic spec: {} 1.3.1. Argo Rollouts controller With the Argo Rollouts controller resource, you can manage the progressive application delivery in your namespace. The Argo Rollouts controller resource monitors the cluster for events, and reacts whenever there is a change in any resource related to Argo Rollouts. The controller reads all the rollout details and brings the cluster to the same state as described in the rollout definition. 1.4. Creating a RolloutManager custom resource To manage progressive delivery of deployments by using Argo Rollouts in Red Hat OpenShift GitOps, you must create and configure a RolloutManager custom resource (CR) in the namespace of your choice. By default, any new argo-rollouts instance has permission to manage resources only in the namespace where it is deployed, but you can use Argo Rollouts in multiple namespaces as required. Prerequisites Red Hat OpenShift GitOps 1.9.0 or a newer version is installed on your cluster. Procedure Log in to the OpenShift Container Platform web console as a cluster administrator. In the Administrator perspective, click Operators Installed Operators . Create or select the project where you want to create and configure a RolloutManager custom resource (CR) from the Project drop-down menu. 
Select OpenShift GitOps Operator from the installed operators. In the Details tab, under the Provided APIs section, click Create instance in the RolloutManager pane. On the Create RolloutManager page, select the YAML view and use the default YAML or edit it according to your requirements: Example: RolloutManager CR apiVersion: argoproj.io/v1alpha1 kind: RolloutManager metadata: name: argo-rollout labels: example: basic spec: {} Click Create . In the RolloutManager tab, under the RolloutManagers section, verify that the Status field of the RolloutManager instance shows as Phase: Available . In the left navigation pane, verify the creation of the namespace-scoped supporting resources: Click Workloads Deployments to verify that the argo-rollouts deployment is available with the Status showing as 1 of 1 pods running. Click Workloads Secrets to verify that the argo-rollouts-notification-secret secret is available. Click Networking Services to verify that the argo-rollouts-metrics service is available. Click User Management Roles to verify that the argo-rollouts role and argo-rollouts-aggregate-to-admin , argo-rollouts-aggregate-to-edit , and argo-rollouts-aggregate-to-view cluster roles are available. Click User Management RoleBindings to verify that the argo-rollouts role binding is available. 1.5. Deleting a RolloutManager custom resource Uninstalling the Red Hat OpenShift GitOps Operator does not remove the resources that were created during installation. You must manually delete the RolloutManager custom resource (CR) before you uninstall the Red Hat OpenShift GitOps Operator. Prerequisites Red Hat OpenShift GitOps 1.9.0 or a newer version is installed on your cluster. A RolloutManager CR exists in your namespace. Procedure Log in to the OpenShift Container Platform web console as a cluster administrator. In the Administrator perspective, click Operators Installed Operators . Click the Project drop-down menu and select the project that contains the RolloutManager CR. Select OpenShift GitOps Operator from the installed operators. Click the RolloutManager tab to find RolloutManager instances under the RolloutManagers section. Click the instance. Click Actions Delete RolloutManager from the drop-down menu, and click Delete to confirm in the dialog box. In the RolloutManager tab, under the RolloutManagers section, verify that the RolloutManager instance is not available anymore. In the left navigation pane, verify the deletion of the namespace-scoped supporting resources: Click Workloads Deployments to verify that the argo-rollouts deployment is deleted. Click Workloads Secrets to verify that the argo-rollouts-notification-secret secret is deleted. Click Networking Services to verify that the argo-rollouts-metrics service is deleted. Click User Management Roles to verify that the argo-rollouts role and argo-rollouts-aggregate-to-admin , argo-rollouts-aggregate-to-edit , and argo-rollouts-aggregate-to-view cluster roles are deleted. Click User Management RoleBindings to verify that the argo-rollouts role binding is deleted. 1.6. Additional resources Installing Red Hat OpenShift GitOps Uninstalling Red Hat OpenShift GitOps Canary deployments Blue-green deployments RolloutManager Custom Resource specification Blue-green and canary deployments with Argo Rollouts Argo Rollouts tech preview limitations
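The RolloutManager CR only deploys and configures the Argo Rollouts controller; the deployment strategy for an application is described separately in a Rollout resource that the controller then reconciles. The following canary sketch is illustrative only and is not taken from this chapter; the image, step weights, and pause durations are placeholders:

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollouts-demo
spec:
  replicas: 5
  selector:
    matchLabels:
      app: rollouts-demo
  template:
    metadata:
      labels:
        app: rollouts-demo
    spec:
      containers:
      - name: rollouts-demo
        image: argoproj/rollouts-demo:blue   # placeholder image
        ports:
        - containerPort: 8080
  strategy:
    canary:
      steps:
      - setWeight: 20
      - pause: {}                            # wait for manual promotion
      - setWeight: 60
      - pause: {duration: 30s}

Updating the image in the pod template triggers a new rollout, and the controller walks through the steps, gradually shifting traffic to the new version.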
[ "apiVersion: argoproj.io/v1alpha1 kind: RolloutManager metadata: name: argo-rollout labels: example: basic spec: {}", "apiVersion: argoproj.io/v1alpha1 kind: RolloutManager metadata: name: argo-rollout labels: example: basic spec: {}" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.12/html/argo_rollouts/using-argo-rollouts-for-progressive-deployment-delivery
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/users_and_identity_management_guide/making-open-source-more-inclusive
Chapter 15. obr
Chapter 15. obr 15.1. obr:deploy 15.1.1. Description Deploys a list of bundles using OBR service. 15.1.2. Syntax obr:deploy [options] bundles 15.1.3. Arguments Name Description bundles List of bundle names to deploy (separated by whitespaces). The bundles are identified using the following syntax: symbolic_name,version where version is optional. 15.1.4. Options Name Description --help Display this help message -d, --deployOptional Deploy optional bundles -s, --start Start the deployed bundles 15.2. obr:find 15.2.1. Description Find OBR bundles for a given filter. 15.2.2. Syntax obr:find [options] requirements 15.2.3. Arguments Name Description requirements Requirement 15.2.4. Options Name Description --help Display this help message 15.3. obr:info 15.3.1. Description Prints information about OBR bundles. 15.3.2. Syntax obr:info [options] bundles 15.3.3. Arguments Name Description bundles Specify bundles to query for information (separated by whitespaces). The bundles are identified using the following syntax: symbolic_name,version where version is optional. 15.3.4. Options Name Description --help Display this help message 15.4. obr:list 15.4.1. Description Lists OBR bundles, optionally providing the given packages. 15.4.2. Syntax obr:list [options] [packages] 15.4.3. Arguments Name Description packages A list of packages separated by whitespaces. 15.4.4. Options Name Description --help Display this help message --no-format Disable table rendered output 15.5. obr:resolve 15.5.1. Description Shows the resolution output for a given set of requirements. 15.5.2. Syntax obr:resolve [options] requirements 15.5.3. Arguments Name Description requirements Requirements 15.5.4. Options Name Description -w, --why Display the reason of the inclusion of the resource --help Display this help message --no-remote Ignore remote resources during resolution -l, --no-local Ignore local resources during resolution --deploy Deploy the selected bundles --optional Resolve optional dependencies --start Deploy and start the selected bundles 15.6. obr:source 15.6.1. Description Downloads the sources for an OBR bundle. 15.6.2. Syntax obr:source [options] folder bundles 15.6.3. Arguments Name Description folder Local folder for storing sources bundles List of bundles to download the sources for. The bundles are identified using the following syntax: symbolic_name,version where version is optional. 15.6.4. Options Name Description --help Display this help message -x Extract the archive 15.7. obr:start 15.7.1. Description Deploys and starts a list of bundles using OBR. 15.7.2. Syntax obr:start [options] bundles 15.7.3. Arguments Name Description bundles List of bundles to deploy (separated by whitespaces). The bundles are identified using the following syntax: symbolic_name,version where version is optional. 15.7.4. Options Name Description --help Display this help message -d, --deployOptional Deploy optional bundles 15.8. obr:url-add 15.8.1. Description Adds a list of repository URLs to the OBR service. 15.8.2. Syntax obr:url-add [options] urls 15.8.3. Arguments Name Description urls Repository URLs to add to the OBR service separated by whitespaces 15.8.4. Options Name Description --help Display this help message 15.9. obr:url-list 15.9.1. Description Displays the repository URLs currently associated with the OBR service. 15.9.2. Syntax obr:url-list [options] 15.9.3. Options Name Description --help Display this help message --no-format Disable table rendered output 15.10. obr:url-refresh 15.10.1. 
Description Reloads the repositories to obtain a fresh list of bundles. 15.10.2. Syntax obr:url-refresh [options] [ids] 15.10.3. Arguments Name Description ids Repository URLs (or indexes if you use -i) to refresh (leave empty for all) 15.10.4. Options Name Description -i, --index Use index to identify URL --help Display this help message 15.11. obr:url-remove 15.11.1. Description Removes a list of repository URLs from the OBR service. 15.11.2. Syntax obr:url-remove [options] ids 15.11.3. Arguments Name Description ids Repository URLs (or indexes if you use -i) to remove from OBR service 15.11.4. Options Name Description -i, --index Use index to identify URL --help Display this help message
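The reference entries above can be combined into a short working session. The following console sketch assumes a Karaf shell prompt and uses placeholder values for the repository URL and for the bundle symbolic name and version; substitute the values for your own OBR repository. obr:url-add registers the repository, obr:url-list and obr:list show what is available, and obr:deploy -s deploys and starts the selected bundle.
karaf@root()> obr:url-add http://repo.example.com/repository.xml
karaf@root()> obr:url-list
karaf@root()> obr:list
karaf@root()> obr:deploy -s com.example.mybundle,1.0.0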
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_console_reference/obr
Chapter 13. Red Hat Process Automation Manager Kogito Operator interaction with Prometheus and Grafana
Chapter 13. Red Hat Process Automation Manager Kogito Operator interaction with Prometheus and Grafana Red Hat build of Kogito in Red Hat Decision Manager provides a monitoring-prometheus-addon add-on that enables Prometheus metrics monitoring for Red Hat build of Kogito microservices and generates Grafana dashboards that consume the default metrics exported by the add-on. The RHPAM Kogito Operator uses the Prometheus Operator to expose the metrics from your project for Prometheus to scrape. Due to this dependency, the Prometheus Operator must be installed in the same namespace as your project. If you want to enable the Prometheus metrics monitoring for your Red Hat build of Kogito microservices, add the following dependency to the pom.xml file in your project, depending on the framework you are using: Dependency for Prometheus Red Hat build of Quarkus add-on <dependency> <groupId>org.kie.kogito</groupId> <artifactId>monitoring-prometheus-quarkus-addon</artifactId> </dependency> Dependency for Prometheus Spring Boot add-on <dependency> <groupId>org.kie.kogito</groupId> <artifactId>monitoring-prometheus-springboot-addon</artifactId> </dependency> When you deploy a Red Hat build of Kogito microservice that uses the monitoring-prometheus-addon add-on and the Prometheus Operator is installed, the Red Hat Process Automation Manager Kogito Operator creates a ServiceMonitor custom resource to expose the metrics for Prometheus, as shown in the following example: Example ServiceMonitor resource for Prometheus apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: onboarding-service name: onboarding-service namespace: kogito spec: endpoints: - path: /metrics targetPort: 8080 scheme: http namespaceSelector: matchNames: - kogito selector: matchLabels: app: onboarding-service You must manually configure your Prometheus custom resource that is managed by the Prometheus Operator to select the ServiceMonitor resource: Example Prometheus resource apiVersion: monitoring.coreos.com/v1 kind: Prometheus metadata: name: prometheus spec: serviceAccountName: prometheus serviceMonitorSelector: matchLabels: app: dmn-drools-quarkus-metrics-service After you configure your Prometheus resource with the ServiceMonitor resource, you can see the endpoint scraped by Prometheus in the Targets page in the Prometheus web console. The metrics exposed by the Red Hat Decision Manager service appear in the Graph view. The RHPAM Kogito Operator also creates a GrafanaDashboard custom resource defined by the Grafana Operator for each of the Grafana dashboards generated by the add-on. The app label for the dashboards is the name of the deployed Red Hat build of Kogito microservice. You must set the dashboardLabelSelector property of the Grafana custom resource according to the relevant Red Hat build of Kogito microservice. Example Grafana resource apiVersion: integreatly.org/v1alpha1 kind: Grafana metadata: name: example-grafana spec: ingress: enabled: true config: auth: disable_signout_menu: true auth.anonymous: enabled: true log: level: warn mode: console security: admin_password: secret admin_user: root dashboardLabelSelector: - matchExpressions: - key: app operator: In values: - my-kogito-application
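To verify that the wiring described above is in place, you can list the generated resources and pull the metrics endpoint directly. This is only a sketch: the kogito namespace, the onboarding-service name, and port 8080 are taken from the examples above and must be replaced with the values used in your deployment.
oc get servicemonitor onboarding-service -n kogito
oc get grafanadashboards -n kogito
oc port-forward svc/onboarding-service 8080:8080 -n kogito &
curl -s http://localhost:8080/metrics | head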
[ "<dependency> <groupId>org.kie.kogito</groupId> <artifactId>monitoring-prometheus-quarkus-addon</artifactId> </dependency>", "<dependency> <groupId>org.kie.kogito</groupId> <artifactId>monitoring-prometheus-springboot-addon</artifactId> </dependency>", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: onboarding-service name: onboarding-service namespace: kogito spec: endpoints: - path: /metrics targetPort: 8080 scheme: http namespaceSelector: matchNames: - kogito selector: matchLabels: app: onboarding-service", "apiVersion: monitoring.coreos.com/v1 kind: Prometheus metadata: name: prometheus spec: serviceAccountName: prometheus serviceMonitorSelector: matchLabels: app: dmn-drools-quarkus-metrics-service", "apiVersion: integreatly.org/v1alpha1 kind: Grafana metadata: name: example-grafana spec: ingress: enabled: true config: auth: disable_signout_menu: true auth.anonymous: enabled: true log: level: warn mode: console security: admin_password: secret admin_user: root dashboardLabelSelector: - matchExpressions: - key: app operator: In values: - my-kogito-application" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_decision_manager/con-rhpam-kogito-operator-with-prometheus-and-grafana_deploying-kogito-microservices-on-openshift
Chapter 66. CoAP Component
Chapter 66. CoAP Component Available as of Camel version 2.16 Camel-CoAP is an Apache Camel component that allows you to work with CoAP, a lightweight REST-type protocol for machine-to-machine operation. CoAP, the Constrained Application Protocol, is a specialized web transfer protocol for use with constrained nodes and constrained networks, and is based on RFC 7252. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-coap</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 66.1. Options The CoAP component has no options. The CoAP endpoint is configured using the URI syntax coap:uri with the following path and query parameters: 66.1.1. Path Parameters (1 parameter): Name Description Default Type uri The URI for the CoAP endpoint URI 66.1.2. Query Parameters (5 parameters): Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means that any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, are processed as a message and handled by the routing Error Handler. By default, the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and ignored. false boolean coapMethodRestrict (consumer) Comma-separated list of methods that the CoAP consumer will bind to. The default is to bind to all methods (DELETE, GET, POST, PUT). String exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Note that if the bridgeErrorHandler option is enabled, this option is not in use. By default, the consumer deals with exceptions, which are logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 66.2. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.coap.enabled Enable the coap component true Boolean camel.component.coap.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 66.3. Message Headers Name Type Description CamelCoapMethod String The request method that the CoAP producer should use when calling the target CoAP server URI. Valid options are DELETE, GET, PING, POST, and PUT. CamelCoapResponseCode String The CoAP response code sent by the external server. See RFC 7252 for details of what each code means. CamelCoapUri String The URI of a CoAP server to call. This overrides any existing URI configured directly on the endpoint. 66.3.1. Configuring the CoAP producer request method The following rules determine which request method the CoAP producer uses to invoke the target URI: The value of the CamelCoapMethod header GET if a query string is provided on the target CoAP server URI. POST if the message exchange body is not null. GET otherwise.
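While developing a route that exposes a coap: consumer endpoint, it can be useful to exercise the endpoint from a shell. The sketch below assumes the coap-client utility from the libcoap project is installed and that a consumer is listening on the default CoAP port 5683 with a /greeting resource; the host, port, and resource path are illustrative only.
coap-client -m get coap://localhost:5683/greeting
coap-client -m post -e 'hello' coap://localhost:5683/greeting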
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-coap</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "coap:uri" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/coap-component
19.3. How Kerberos Works
19.3. How Kerberos Works Kerberos differs from username/password authentication methods because instead of authenticating each user to each network service, it uses symmetric encryption and a trusted third party, a KDC, to authenticate users to a suite of network services. Once a user authenticates to the KDC, it sends a ticket specific to that session back to the user's machine, and any kerberized services look for the ticket on the user's machine rather than asking the user to authenticate using a password. When a user on a kerberized network logs in to their workstation, their principal is sent to the KDC in a request for a TGT from the AS. This request can be sent by the login program so that it is transparent to the user, or can be sent by the kinit program after the user logs in. The KDC checks for the principal in its database. If the principal is found, the KDC creates a TGT, which is encrypted using the user's key and returned to that user. The login or kinit program on the client machine then decrypts the TGT using the user's key (which it computes from the user's password). The user's key is used only on the client machine and is not sent over the network. The TGT is set to expire after a certain period of time (usually ten hours) and is stored in the client machine's credentials cache. An expiration time is set so that a compromised TGT is of use to an attacker for only a short period of time. Once the TGT is issued, the user does not have to re-enter their password until the TGT expires or they log out and log in again. Whenever the user needs access to a network service, the client software uses the TGT to request a new ticket for that specific service from the TGS. The service ticket is then used to authenticate the user to that service transparently. Warning The Kerberos system can be compromised any time any user on the network authenticates against a non-kerberized service by sending a password in plain text. Use of non-kerberized services is discouraged. Such services include Telnet and FTP. Use of other encrypted protocols, such as SSH or SSL-secured services, however, is acceptable, though not ideal. This is only a broad overview of how Kerberos authentication works. Those seeking a more in-depth look at Kerberos authentication should refer to Section 19.7, "Additional Resources" . Note Kerberos depends on certain network services to work correctly. First, Kerberos requires approximate clock synchronization between the machines on the network. Therefore, a clock synchronization program, such as ntpd , should be set up for the network. For more information about configuring ntpd , refer to /usr/share/doc/ntp- <version-number> /index.htm for details on setting up Network Time Protocol servers (replace <version-number> with the version number of the ntp package installed on the system). Also, since certain aspects of Kerberos rely on the Domain Name System (DNS), be sure that the DNS entries and hosts on the network are all properly configured. Refer to the Kerberos V5 System Administrator's Guide , provided in PostScript and HTML formats in /usr/share/doc/krb5-server- <version-number> , for more information (replace <version-number> with the version number of the krb5-server package installed on the system).
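From the user's point of view, the flow described above can be observed with the standard Kerberos client utilities: kinit requests a TGT from the KDC, klist lists the tickets in the credentials cache (including any service tickets obtained afterwards), and kdestroy clears the cache. The principal shown is an example only.
kinit user@EXAMPLE.COM
klist
kdestroy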
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-kerberos-works
Chapter 2. Preparing Red Hat Enterprise Linux for a Red Hat Quay proof of concept deployment
Chapter 2. Preparing Red Hat Enterprise Linux for a Red Hat Quay proof of concept deployment Use the following procedures to configure Red Hat Enterprise Linux (RHEL) for a Red Hat Quay proof of concept deployment. 2.1. Install and register the RHEL server Use the following procedure to configure the Red Hat Enterprise Linux (RHEL) server for a Red Hat Quay proof of concept deployment. Procedure Install the latest RHEL 9 server. You can do a minimal, shell-access only install, or Server plus GUI if you want a desktop. Register and subscribe your RHEL server system as described in How to register and subscribe a RHEL system to the Red Hat Customer Portal using Red Hat Subscription-Manager Enter the following commands to register your system and list available subscriptions. Choose an available RHEL server subscription, attach to its pool ID, and upgrade to the latest software: # subscription-manager register --username=<user_name> --password=<password> # subscription-manager refresh # subscription-manager list --available # subscription-manager attach --pool=<pool_id> # yum update -y 2.2. Registry authentication Use the following procedure to authenticate your registry for a Red Hat Quay proof of concept. Procedure Set up authentication to registry.redhat.io by following the Red Hat Container Registry Authentication procedure. Setting up authentication allows you to pull the Quay container. Note This differs from earlier versions of Red Hat Quay, when the images were hosted on Quay.io. Enter the following command to log in to the registry: USD sudo podman login registry.redhat.io You are prompted to enter your username and password . 2.3. Firewall configuration If you have a firewall running on your system, you might have to add rules that allow access to Red Hat Quay. Use the following procedure to configure your firewall for a proof of concept deployment. Procedure The commands required depend on the ports that you have mapped on your system, for example: # firewall-cmd --permanent --add-port=80/tcp \ && firewall-cmd --permanent --add-port=443/tcp \ && firewall-cmd --permanent --add-port=5432/tcp \ && firewall-cmd --permanent --add-port=5433/tcp \ && firewall-cmd --permanent --add-port=6379/tcp \ && firewall-cmd --reload 2.4. IP addressing and naming services There are several ways to configure the component containers in Red Hat Quay so that they can communicate with each other, for example: Using a naming service . If you want your deployment to survive container restarts, which typically result in changed IP addresses, you can implement a naming service. For example, the dnsname plugin is used to allow containers to resolve each other by name. Using the host network . You can use the podman run command with the --net=host option and then use container ports on the host when specifying the addresses in the configuration. This option is susceptible to port conflicts when two containers want to use the same port. This method is not recommended. Configuring port mapping . You can use port mappings to expose ports on the host and then use these ports in combination with the host IP address or host name. This document uses port mapping and assumes a static IP address for your host system. Table 2.1. 
Sample proof of concept port mapping Component Port mapping Address Quay -p 80:8080 -p 443:8443 http://quay-server.example.com Postgres for Quay -p 5432:5432 quay-server.example.com:5432 Redis -p 6379:6379 quay-server.example.com:6379 Postgres for Clair V4 -p 5433:5432 quay-server.example.com:5433 Clair V4 -p 8081:8080 http://quay-server.example.com:8081
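Before continuing to the deployment chapters, it is worth confirming that the prerequisites above are in place. The following check is only a sketch: the host name quay-server.example.com comes from the sample table and must resolve to the static IP address of your host.
sudo firewall-cmd --list-ports
getent hosts quay-server.example.com
sudo podman login registry.redhat.io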
[ "subscription-manager register --username=<user_name> --password=<password> subscription-manager refresh subscription-manager list --available subscription-manager attach --pool=<pool_id> yum update -y", "sudo podman login registry.redhat.io", "firewall-cmd --permanent --add-port=80/tcp && firewall-cmd --permanent --add-port=443/tcp && firewall-cmd --permanent --add-port=5432/tcp && firewall-cmd --permanent --add-port=5433/tcp && firewall-cmd --permanent --add-port=6379/tcp && firewall-cmd --reload" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/proof_of_concept_-_deploying_red_hat_quay/poc-configuring-rhel-server
Chapter 10. Distributed tracing
Chapter 10. Distributed tracing Distributed tracing allows you to track the progress of transactions between applications in a distributed system. In a microservices architecture, tracing tracks the progress of transactions between services. Trace data is useful for monitoring application performance and investigating issues with target systems and end-user applications. In AMQ Streams, tracing facilitates the end-to-end tracking of messages: from source systems to Kafka, and then from Kafka to target systems and applications. It complements the metrics that are available to view in Grafana dashboards , as well as the component loggers. How AMQ Streams supports tracing Support for tracing is built in to the following components: Kafka Connect (including Kafka Connect with Source2Image support) MirrorMaker MirrorMaker 2.0 AMQ Streams Kafka Bridge You enable and configure tracing for these components using template configuration properties in their custom resources. To enable tracing in Kafka producers, consumers, and Kafka Streams API applications, you instrument application code using the OpenTracing Apache Kafka Client Instrumentation library (included with AMQ Streams). When instrumented, clients generate trace data; for example, when producing messages or writing offsets to the log. Traces are sampled according to a sampling strategy and then visualized in the Jaeger user interface. Note Tracing is not supported for Kafka brokers. Setting up tracing for applications and systems beyond AMQ Streams is outside the scope of this chapter. To learn more about this subject, search for "inject and extract" in the OpenTracing documentation . Outline of procedures To set up tracing for AMQ Streams, follow these procedures in order: Set up tracing for clients: Initialize a Jaeger tracer for Kafka clients Instrument clients with tracers: Instrument producers and consumers for tracing Instrument Kafka Streams applications for tracing Set up tracing for MirrorMaker, Kafka Connect, and the Kafka Bridge Prerequisites The Jaeger backend components are deployed to your OpenShift cluster. For deployment instructions, see the Jaeger deployment documentation . 10.1. Overview of OpenTracing and Jaeger AMQ Streams uses the OpenTracing and Jaeger projects. OpenTracing is an API specification that is independent from the tracing or monitoring system. The OpenTracing APIs are used to instrument application code Instrumented applications generate traces for individual transactions across the distributed system Traces are composed of spans that define specific units of work over time Jaeger is a tracing system for microservices-based distributed systems. Jaeger implements the OpenTracing APIs and provides client libraries for instrumentation The Jaeger user interface allows you to query, filter, and analyze trace data Additional resources OpenTracing Jaeger 10.2. Setting up tracing for Kafka clients Initialize a Jaeger tracer to instrument your client applications for distributed tracing. 10.2.1. Initializing a Jaeger tracer for Kafka clients Configure and initialize a Jaeger tracer using a set of tracing environment variables . Procedure In each client application: Add Maven dependencies for Jaeger to the pom.xml file for the client application: <dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-client</artifactId> <version>1.1.0.redhat-00002</version> </dependency> Define the configuration of the Jaeger tracer using the tracing environment variables . 
Create the Jaeger tracer from the environment variables that you defined in step two: Tracer tracer = Configuration.fromEnv().getTracer(); Note For alternative ways to initialize a Jaeger tracer, see the Java OpenTracing library documentation. Register the Jaeger tracer as a global tracer: GlobalTracer.register(tracer); A Jaeger tracer is now initialized for the client application to use. 10.2.2. Environment variables for tracing Use these environment variables when configuring a Jaeger tracer for Kafka clients. Note The tracing environment variables are part of the Jaeger project and are subject to change. For the latest environment variables, see the Jaeger documentation . Property Required Description JAEGER_SERVICE_NAME Yes The name of the Jaeger tracer service. JAEGER_AGENT_HOST No The hostname for communicating with the jaeger-agent through the User Datagram Protocol (UDP). JAEGER_AGENT_PORT No The port used for communicating with the jaeger-agent through UDP. JAEGER_ENDPOINT No The traces endpoint. Only define this variable if the client application will bypass the jaeger-agent and connect directly to the jaeger-collector . JAEGER_AUTH_TOKEN No The authentication token to send to the endpoint as a bearer token. JAEGER_USER No The username to send to the endpoint if using basic authentication. JAEGER_PASSWORD No The password to send to the endpoint if using basic authentication. JAEGER_PROPAGATION No A comma-separated list of formats to use for propagating the trace context. Defaults to the standard Jaeger format. Valid values are jaeger , b3 , and w3c . JAEGER_REPORTER_LOG_SPANS No Indicates whether the reporter should also log the spans. JAEGER_REPORTER_MAX_QUEUE_SIZE No The reporter's maximum queue size. JAEGER_REPORTER_FLUSH_INTERVAL No The reporter's flush interval, in ms. Defines how frequently the Jaeger reporter flushes span batches. JAEGER_SAMPLER_TYPE No The sampling strategy to use for client traces: Constant Probabilistic Rate Limiting Remote (the default) To sample all traces, use the Constant sampling strategy with a parameter of 1. For more information, see the Jaeger documentation . JAEGER_SAMPLER_PARAM No The sampler parameter (number). JAEGER_SAMPLER_MANAGER_HOST_PORT No The hostname and port to use if a Remote sampling strategy is selected. JAEGER_TAGS No A comma-separated list of tracer-level tags that are added to all reported spans. The value can also refer to an environment variable using the format USD{envVarName:default} . :default is optional and identifies a value to use if the environment variable cannot be found. Additional resources Section 10.2.1, "Initializing a Jaeger tracer for Kafka clients" 10.3. Instrumenting Kafka clients with tracers Instrument Kafka producer and consumer clients, and Kafka Streams API applications for distributed tracing. 10.3.1. Instrumenting producers and consumers for tracing Use a Decorator pattern or Interceptors to instrument your Java producer and consumer application code for tracing. Procedure In the application code of each producer and consumer application: Add the Maven dependency for OpenTracing to the producer or consumer's pom.xml file. <dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-client</artifactId> <version>0.1.15.redhat-00001</version> </dependency> Instrument your client application code using either a Decorator pattern or Interceptors. 
To use a Decorator pattern: // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer: TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer); // Send: tracingProducer.send(...); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer); // Subscribe: tracingConsumer.subscribe(Collections.singletonList("messages")); // Get messages: ConsumerRecords<Integer, String> records = tracingConsumer.poll(1000); // Retrieve SpanContext from polled record (consumer side): ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); To use Interceptors: // Register the tracer with GlobalTracer: GlobalTracer.register(tracer); // Add the TracingProducerInterceptor to the sender properties: senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Send: producer.send(...); // Add the TracingConsumerInterceptor to the consumer properties: consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Subscribe: consumer.subscribe(Collections.singletonList("messages")); // Get messages: ConsumerRecords<Integer, String> records = consumer.poll(1000); // Retrieve the SpanContext from a polled message (consumer side): ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); 10.3.1.1. Custom span names in a Decorator pattern A span is a logical unit of work in Jaeger, with an operation name, start time, and duration. To use a Decorator pattern to instrument your producer and consumer applications, define custom span names by passing a BiFunction object as an additional argument when creating the TracingKafkaProducer and TracingKafkaConsumer objects. The OpenTracing Apache Kafka Client Instrumentation library includes several built-in span names. Example: Using custom span names to instrument client application code in a Decorator pattern // Create a BiFunction for the KafkaProducer that operates on (String operationName, ProducerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ProducerRecord, String> producerSpanNameProvider = (operationName, producerRecord) -> "CUSTOM_PRODUCER_NAME"; // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer, producerSpanNameProvider); // Spans created by the tracingProducer will now have "CUSTOM_PRODUCER_NAME" as the span name. 
// Create a BiFunction for the KafkaConsumer that operates on (String operationName, ConsumerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ConsumerRecord, String> consumerSpanNameProvider = (operationName, consumerRecord) -> operationName.toUpperCase(); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer, passing in the consumerSpanNameProvider BiFunction: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer, consumerSpanNameProvider); // Spans created by the tracingConsumer will have the operation name as the span name, in upper-case. // "receive" -> "RECEIVE" 10.3.1.2. Built-in span names When defining custom span names, you can use the following BiFunctions in the ClientSpanNameProvider class. If no spanNameProvider is specified, CONSUMER_OPERATION_NAME and PRODUCER_OPERATION_NAME are used. BiFunction Description CONSUMER_OPERATION_NAME, PRODUCER_OPERATION_NAME Returns the operationName as the span name: "receive" for consumers and "send" for producers. CONSUMER_PREFIXED_OPERATION_NAME(String prefix), PRODUCER_PREFIXED_OPERATION_NAME(String prefix) Returns a String concatenation of prefix and operationName . CONSUMER_TOPIC, PRODUCER_TOPIC Returns the name of the topic that the message was sent to or retrieved from in the format (record.topic()) . PREFIXED_CONSUMER_TOPIC(String prefix), PREFIXED_PRODUCER_TOPIC(String prefix) Returns a String concatenation of prefix and the topic name in the format (record.topic()) . CONSUMER_OPERATION_NAME_TOPIC, PRODUCER_OPERATION_NAME_TOPIC Returns the operation name and the topic name: "operationName - record.topic()" . CONSUMER_PREFIXED_OPERATION_NAME_TOPIC(String prefix), PRODUCER_PREFIXED_OPERATION_NAME_TOPIC(String prefix) Returns a String concatenation of prefix and "operationName - record.topic()" . 10.3.2. Instrumenting Kafka Streams applications for tracing This section describes how to instrument Kafka Streams API applications for distributed tracing. Procedure In each Kafka Streams API application: Add the opentracing-kafka-streams dependency to the pom.xml file for your Kafka Streams API application: <dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-streams</artifactId> <version>0.1.15.redhat-00001</version> </dependency> Create an instance of the TracingKafkaClientSupplier supplier interface: KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer); Provide the supplier interface to KafkaStreams : KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start(); 10.4. Setting up tracing for MirrorMaker, Kafka Connect, and the Kafka Bridge Distributed tracing is supported for MirrorMaker, MirrorMaker 2.0, Kafka Connect (including Kafka Connect with Source2Image support), and the AMQ Streams Kafka Bridge. Tracing in MirrorMaker and MirrorMaker 2.0 For MirrorMaker and MirrorMaker 2.0, messages are traced from the source cluster to the target cluster. The trace data records messages entering and leaving the MirrorMaker or MirrorMaker 2.0 component. Tracing in Kafka Connect Only messages produced and consumed by Kafka Connect itself are traced. To trace messages sent between Kafka Connect and external systems, you must configure tracing in the connectors for those systems. For more information, see Section 2.2.1, "Configuring Kafka Connect" . 
Tracing in the Kafka Bridge Messages produced and consumed by the Kafka Bridge are traced. Incoming HTTP requests from client applications to send and receive messages through the Kafka Bridge are also traced. To have end-to-end tracing, you must configure tracing in your HTTP clients. 10.4.1. Enabling tracing in MirrorMaker, Kafka Connect, and Kafka Bridge resources Update the configuration of KafkaMirrorMaker , KafkaMirrorMaker2 , KafkaConnect , KafkaConnectS2I , and KafkaBridge custom resources to specify and configure a Jaeger tracer service for each resource. Updating a tracing-enabled resource in your OpenShift cluster triggers two events: Interceptor classes are updated in the integrated consumers and producers in MirrorMaker, MirrorMaker 2.0, Kafka Connect, or the AMQ Streams Kafka Bridge. For MirrorMaker, MirrorMaker 2.0, and Kafka Connect, the tracing agent initializes a Jaeger tracer based on the tracing configuration defined in the resource. For the Kafka Bridge, a Jaeger tracer based on the tracing configuration defined in the resource is initialized by the Kafka Bridge itself. Procedure Perform these steps for each KafkaMirrorMaker , KafkaMirrorMaker2 , KafkaConnect , KafkaConnectS2I , and KafkaBridge resource. In the spec.template property, configure the Jaeger tracer service. For example: Jaeger tracer configuration for Kafka Connect apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... template: connectContainer: 1 env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: "6831" tracing: 2 type: jaeger #... Jaeger tracer configuration for MirrorMaker apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: #... template: mirrorMakerContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: "6831" tracing: type: jaeger #... Jaeger tracer configuration for MirrorMaker 2.0 apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: #... template: connectContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: "6831" tracing: type: jaeger #... Jaeger tracer configuration for the Kafka Bridge apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: #... template: bridgeContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: "6831" tracing: type: jaeger #... 1 Use the tracing environment variables as template configuration properties. 2 Set the spec.tracing.type property to jaeger . Create or update the resource: oc apply -f your-file Additional resources Section 13.2.61, " ContainerTemplate schema reference" Section 2.6, "Customizing OpenShift resources"
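For a client application that runs outside OpenShift, the same tracing environment variables can simply be exported in the shell before the application starts. The values below are illustrative: the agent host depends on how the Jaeger agent is exposed, and the constant sampler with parameter 1 (value const ) records every trace, which is normally only suitable for development.
export JAEGER_SERVICE_NAME=my-kafka-client
export JAEGER_AGENT_HOST=jaeger-agent-name
export JAEGER_AGENT_PORT=6831
export JAEGER_SAMPLER_TYPE=const
export JAEGER_SAMPLER_PARAM=1
java -jar my-kafka-client.jar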
[ "<dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-client</artifactId> <version>1.1.0.redhat-00002</version> </dependency>", "Tracer tracer = Configuration.fromEnv().getTracer();", "GlobalTracer.register(tracer);", "<dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-client</artifactId> <version>0.1.15.redhat-00001</version> </dependency>", "// Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer: TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer); // Send: tracingProducer.send(...); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer); // Subscribe: tracingConsumer.subscribe(Collections.singletonList(\"messages\")); // Get messages: ConsumerRecords<Integer, String> records = tracingConsumer.poll(1000); // Retrieve SpanContext from polled record (consumer side): ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);", "// Register the tracer with GlobalTracer: GlobalTracer.register(tracer); // Add the TracingProducerInterceptor to the sender properties: senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Send: producer.send(...); // Add the TracingConsumerInterceptor to the consumer properties: consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Subscribe: consumer.subscribe(Collections.singletonList(\"messages\")); // Get messages: ConsumerRecords<Integer, String> records = consumer.poll(1000); // Retrieve the SpanContext from a polled message (consumer side): ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);", "// Create a BiFunction for the KafkaProducer that operates on (String operationName, ProducerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ProducerRecord, String> producerSpanNameProvider = (operationName, producerRecord) -> \"CUSTOM_PRODUCER_NAME\"; // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer, producerSpanNameProvider); // Spans created by the tracingProducer will now have \"CUSTOM_PRODUCER_NAME\" as the span name. 
// Create a BiFunction for the KafkaConsumer that operates on (String operationName, ConsumerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ConsumerRecord, String> consumerSpanNameProvider = (operationName, consumerRecord) -> operationName.toUpperCase(); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer, passing in the consumerSpanNameProvider BiFunction: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer, consumerSpanNameProvider); // Spans created by the tracingConsumer will have the operation name as the span name, in upper-case. // \"receive\" -> \"RECEIVE\"", "<dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-streams</artifactId> <version>0.1.15.redhat-00001</version> </dependency>", "KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer);", "KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start();", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # template: connectContainer: 1 env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: \"6831\" tracing: 2 type: jaeger #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: # template: mirrorMakerContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: \"6831\" tracing: type: jaeger #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: # template: connectContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: \"6831\" tracing: type: jaeger #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # template: bridgeContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: \"6831\" tracing: type: jaeger #", "apply -f your-file" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/using_amq_streams_on_openshift/assembly-distributed-tracing-str
23.7. Setting the Hostname
23.7. Setting the Hostname Setup prompts you to supply a host name for this computer, either as a fully-qualified domain name (FQDN) in the format hostname . domainname or as a short host name in the format hostname . Many networks have a Dynamic Host Configuration Protocol (DHCP) service that automatically supplies connected systems with a domain name. To allow the DHCP service to assign the domain name to this machine, specify the short host name only. Note You may give your system any name provided that the full hostname is unique. The hostname may include letters, numbers and hyphens. Change the default setting localhost . localdomain to a unique hostname for each of your Linux instances. Figure 23.23. Setting the hostname 23.7.1. Editing Network Connections Note To change your network configuration after you have completed the installation, use the Network Administration Tool . Type the system-config-network command in a shell prompt to launch the Network Administration Tool . If you are not root, it prompts you for the root password to continue. The Network Administration Tool is now deprecated and will be replaced by NetworkManager during the lifetime of Red Hat Enterprise Linux 6. Usually, the network connection configured earlier in installation phase 1 does not need to be modified during the rest of the installation. You cannot add a new connection on System z because the network subchannels need to be grouped and set online beforehand, and this is currently only done in installation phase 1. To change the existing network connection, click the button Configure Network . The Network Connections dialog appears that allows you to configure network connections for the system, not all of which are relevant to System z. Figure 23.24. Network Connections All network connections on System z are listed in the Wired tab. By default this contains the connection configured earlier in installation phase 1 and is either eth0 (OSA, LCS), or hsi0 (HiperSockets). Note that on System z you cannot add a new connection here. To modify an existing connection, select a row in the list and click the Edit button. A dialog box appears with a set of tabs appropriate to wired connections, as described below. The most important tabs on System z are Wired and IPv4 Settings . When you have finished editing network settings, click Apply to save the new configuration. If you reconfigured a device that was already active during installation, you must restart the device to use the new configuration - refer to Section 9.7.1.6, "Restart a network device" . 23.7.1.1. Options common to all types of connection Certain configuration options are common to all connection types. Specify a name for the connection in the Connection name name field. Select Connect automatically to start the connection automatically when the system boots. When NetworkManager runs on an installed system, the Available to all users option controls whether a network configuration is available system-wide or not. During installation, ensure that Available to all users remains selected for any network interface that you configure. 23.7.1.2. The Wired tab Use the Wired tab to specify or change the media access control (MAC) address for the network adapter, and either set the maximum transmission unit (MTU, in bytes) that can pass through the interface. Figure 23.25. The Wired tab 23.7.1.3. The 802.1x Security tab Use the 802.1x Security tab to configure 802.1X port-based network access control (PNAC). 
Select Use 802.1X security for this connection to enable access control, then specify details of your network. The configuration options include: Authentication Choose one of the following methods of authentication: TLS for Transport Layer Security Tunneled TLS for Tunneled Transport Layer Security , otherwise known as TTLS, or EAP-TTLS Protected EAP (PEAP) for Protected Extensible Authentication Protocol Identity Provide the identity of this server. User certificate Browse to a personal X.509 certificate file encoded with Distinguished Encoding Rules (DER) or Privacy Enhanced Mail (PEM). CA certificate Browse to a X.509 certificate authority certificate file encoded with Distinguished Encoding Rules (DER) or Privacy Enhanced Mail (PEM). Private key Browse to a private key file encoded with Distinguished Encoding Rules (DER), Privacy Enhanced Mail (PEM), or the Personal Information Exchange Syntax Standard (PKCS#12). Private key password The password for the private key specified in the Private key field. Select Show password to make the password visible as you type it. Figure 23.26. The 802.1x Security tab 23.7.1.4. The IPv4 Settings tab Use the IPv4 Settings tab tab to configure the IPv4 parameters for the previously selected network connection. The address, netmask, gateway, DNS servers and DNS search suffix for an IPv4 connection were configured during installation phase 1 or reflect the following parameters in the parameter file or configuration file: IPADDR , NETMASK , GATEWAY , DNS , SEARCHDNS (Refer to Section 26.3, "Installation Network Parameters" ). Use the Method drop-down menu to specify which settings the system should attempt to obtain from a Dynamic Host Configuration Protocol (DHCP) service running on the network. Choose from the following options: Automatic (DHCP) IPv4 parameters are configured by the DHCP service on the network. Automatic (DHCP) addresses only The IPv4 address, netmask, and gateway address are configured by the DHCP service on the network, but DNS servers and search domains must be configured manually. Manual IPv4 parameters are configured manually for a static configuration. Link-Local Only A link-local address in the 169.254/16 range is assigned to the interface. Shared to other computers The system is configured to provide network access to other computers. The interface is assigned an address in the 10.42.x.1/24 range, a DHCP server and DNS server are started, and the interface is connected to the default network connection on the system with network address translation (NAT). Disabled IPv4 is disabled for this connection. If you selected a method that requires you to supply manual parameters, enter details of the IP address for this interface, the netmask, and the gateway in the Addresses field. Use the Add and Delete buttons to add or remove addresses. Enter a comma-separated list of DNS servers in the DNS servers field, and a comma-separated list of domains in the Search domains field for any domains that you want to include in name server lookups. Optionally, enter a name for this network connection in the DHCP client ID field. This name must be unique on the subnet. When you assign a meaningful DHCP client ID to a connection, it is easy to identify this connection when troubleshooting network problems. Deselect the Require IPv4 addressing for this connection to complete check box to allow the system to make this connection on an IPv6-enabled network if IPv4 configuration fails but IPv6 configuration succeeds. Figure 23.27. 
The IPv4 Settings tab 23.7.1.4.1. Editing IPv4 routes Red Hat Enterprise Linux configures a number of routes automatically based on the IP addresses of a device. To edit additional routes, click the Routes button. The Editing IPv4 routes dialog appears. Figure 23.28. The Editing IPv4 Routes dialog Click Add to add the IP address, netmask, gateway address, and metric for a new static route. Select Ignore automatically obtained routes to make the interface use only the routes specified for it here. Select Use this connection only for resources on its network to restrict connections only to the local network. 23.7.1.5. The IPv6 Settings tab Use the IPv6 Settings tab tab to configure the IPv6 parameters for the previously selected network connection. Use the Method drop-down menu to specify which settings the system should attempt to obtain from a Dynamic Host Configuration Protocol (DHCP) service running on the network. Choose from the following options: Ignore IPv6 is ignored for this connection. Automatic NetworkManager uses router advertisement (RA) to create an automatic, stateless configuration. Automatic, addresses only NetworkManager uses RA to create an automatic, stateless configuration, but DNS servers and search domains are ignored and must be configured manually. Automatic, DHCP only NetworkManager does not use RA, but requests information from DHCPv6 directly to create a stateful configuration. Manual IPv6 parameters are configured manually for a static configuration. Link-Local Only A link-local address with the fe80::/10 prefix is assigned to the interface. If you selected a method that requires you to supply manual parameters, enter details of the IP address for this interface, the netmask, and the gateway in the Addresses field. Use the Add and Delete buttons to add or remove addresses. Enter a comma-separated list of DNS servers in the DNS servers field, and a comma-separated list of domains in the Search domains field for any domains that you want to include in name server lookups. Optionally, enter a name for this network connection in the DHCP client ID field. This name must be unique on the subnet. When you assign a meaningful DHCP client ID to a connection, it is easy to identify this connection when troubleshooting network problems. Deselect the Require IPv6 addressing for this connection to complete check box to allow the system to make this connection on an IPv4-enabled network if IPv6 configuration fails but IPv4 configuration succeeds. Figure 23.29. The IPv6 Settings tab 23.7.1.5.1. Editing IPv6 routes Red Hat Enterprise Linux configures a number of routes automatically based on the IP addresses of a device. To edit additional routes, click the Routes button. The Editing IPv6 routes dialog appears. Figure 23.30. The Editing IPv6 Routes dialog Click Add to add the IP address, netmask, gateway address, and metric for a new static route. Select Use this connection only for resources on its network to restrict connections only to the local network. 23.7.1.6. Restart a network device If you reconfigured a network that was already in use during installation, you must disconnect and reconnect the device in anaconda for the changes to take effect. Anaconda uses interface configuration (ifcfg) files to communicate with NetworkManager . A device becomes disconnected when its ifcfg file is removed, and becomes reconnected when its ifcfg file is restored, as long as ONBOOT=yes is set. 
Refer to the Red Hat Enterprise Linux 6.9 Deployment Guide available from https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/index.html for more information about interface configuration files. Press Ctrl + Alt + F2 to switch to virtual terminal tty2 . Move the interface configuration file to a temporary location: where device_name is the device that you just reconfigured. For example, ifcfg-eth0 is the ifcfg file for eth0 . The device is now disconnected in anaconda . Open the interface configuration file in the vi editor: Verify that the interface configuration file contains the line ONBOOT=yes . If the file does not already contain the line, add it now and save the file. Exit the vi editor. Move the interface configuration file back to the /etc/sysconfig/network-scripts/ directory: The device is now reconnected in anaconda . Press Ctrl + Alt + F6 to return to anaconda .
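As a concrete illustration of the procedure above, the following session disconnects eth0, confirms that its configuration file contains ONBOOT=yes, and reconnects the device. The device name eth0 is an example; substitute the device that you reconfigured.
mv /etc/sysconfig/network-scripts/ifcfg-eth0 /tmp
grep ONBOOT /tmp/ifcfg-eth0
mv /tmp/ifcfg-eth0 /etc/sysconfig/network-scripts/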
[ "mv /etc/sysconfig/network-scripts/ifcfg- device_name /tmp", "vi /tmp/ifcfg- device_name", "mv /tmp/ifcfg- device_name /etc/sysconfig/network-scripts/" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-Netconfig-s390
Deploying OpenShift Data Foundation using IBM Z
Deploying OpenShift Data Foundation using IBM Z Red Hat OpenShift Data Foundation 4.15 Instructions on deploying Red Hat OpenShift Data Foundation to use local storage on IBM Z Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation to use local storage on IBM Z. Note While this document refers only to IBM Z, all information in it also applies to IBM(R) LinuxONE.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_ibm_z/index
function::umodname
function::umodname Name function::umodname - Returns the (short) name of the user module. Synopsis Arguments addr User-space address Description Returns the short name of the user-space module for the current task that the given address is part of. Reports an error when the address is not in a (mapped-in) module, or the module cannot be found for some reason.
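A minimal SystemTap one-liner sketches the function in use. It assumes that debuginfo for the probed binary is installed and uses uaddr() from the context tapset to obtain the current user-space address; probing main in /bin/ls is an example only.
stap -e 'probe process("/bin/ls").function("main") { printf("0x%x is in module %s\n", uaddr(), umodname(uaddr())); exit() }' -c /bin/ls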
[ "umodname:string(addr:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-umodname
1.4. We Need Feedback!
1.4. We Need Feedback! If you find a typographical error in this manual, or if you have thought of a way to make this manual better, we would love to hear from you! Please submit a report in Bugzilla: http://bugzilla.redhat.com/ against the product Red Hat Enterprise Linux 6 and the component doc-Logical_Volume_Manager . When submitting a bug report, be sure to mention the manual's identifier: If you have a suggestion for improving the documentation, try to be as specific as possible when describing it. If you have found an error, include the section number and some of the surrounding text so we can find it easily.
[ "Logical_Volume_Manager_Administration(EN)-6 (2017-3-8-15:20)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/sect-redhat-we_need_feedback
Chapter 8. MachineHealthCheck [machine.openshift.io/v1beta1]
Chapter 8. MachineHealthCheck [machine.openshift.io/v1beta1] Description MachineHealthCheck is the Schema for the machinehealthchecks API Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of machine health check policy status object Most recently observed status of MachineHealthCheck resource 8.1.1. .spec Description Specification of machine health check policy Type object Property Type Description maxUnhealthy integer-or-string Any further remediation is only allowed if at most "MaxUnhealthy" machines selected by "selector" are not healthy. Expects either a positive integer value or a percentage value. Percentage values must be positive whole numbers and are capped at 100%. Both 0 and 0% are valid and will block all remediation. nodeStartupTimeout string Machines older than this duration without a node will be considered to have failed and will be remediated. To prevent Machines without Nodes from being removed, disable startup checks by setting this value explicitly to "0". Expects an unsigned duration string of decimal numbers each with optional fraction and a unit suffix, eg "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". remediationTemplate object RemediationTemplate is a reference to a remediation template provided by an infrastructure provider. This field is completely optional; when filled, the MachineHealthCheck controller creates a new object from the template referenced and hands off remediation of the machine to a controller that lives outside of the Machine API Operator. selector object Label selector to match machines whose health will be exercised. Note: An empty selector will match all machines. unhealthyConditions array UnhealthyConditions contains a list of the conditions that determine whether a node is considered unhealthy. The conditions are combined in a logical OR, i.e. if any of the conditions is met, the node is unhealthy. unhealthyConditions[] object UnhealthyCondition represents a Node condition type and value with a timeout specified as a duration. When the named condition has been in the given status for at least the timeout value, a node is considered unhealthy. 8.1.2. .spec.remediationTemplate Description RemediationTemplate is a reference to a remediation template provided by an infrastructure provider. This field is completely optional; when filled, the MachineHealthCheck controller creates a new object from the template referenced and hands off remediation of the machine to a controller that lives outside of the Machine API Operator. Type object Property Type Description apiVersion string API version of the referent.
fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 8.1.3. .spec.selector Description Label selector to match machines whose health will be exercised. Note: An empty selector will match all machines. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.4. .spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.5. .spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.6. .spec.unhealthyConditions Description UnhealthyConditions contains a list of the conditions that determine whether a node is considered unhealthy. The conditions are combined in a logical OR, i.e. if any of the conditions is met, the node is unhealthy. Type array 8.1.7. .spec.unhealthyConditions[] Description UnhealthyCondition represents a Node condition type and value with a timeout specified as a duration. When the named condition has been in the given status for at least the timeout value, a node is considered unhealthy. 
Type object Property Type Description status string timeout string Expects an unsigned duration string of decimal numbers, each with an optional fraction and a unit suffix, for example "300ms", "1.5h", or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". type string 8.1.8. .status Description Most recently observed status of MachineHealthCheck resource Type object Property Type Description conditions array Conditions defines the current state of the MachineHealthCheck conditions[] object Condition defines an observation of a Machine API resource operational state. currentHealthy integer Total number of healthy machines counted by this machine health check expectedMachines integer Total number of machines counted by this machine health check remediationsAllowed integer RemediationsAllowed is the number of further remediations allowed by this machine health check before maxUnhealthy short-circuiting is applied 8.1.9. .status.conditions Description Conditions defines the current state of the MachineHealthCheck Type array 8.1.10. .status.conditions[] Description Condition defines an observation of a Machine API resource operational state. Type object Property Type Description lastTransitionTime string Last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string A human-readable message indicating details about the transition. This field may be empty. reason string The reason for the condition's last transition in CamelCase. The specific API may choose whether or not this field is considered a guaranteed API. This field may not be empty. severity string Severity provides an explicit classification of the Reason code, so that users or machines can immediately understand the current situation and act accordingly. The Severity field MUST be set only when Status=False. status string Status of the condition, one of True, False, Unknown. type string Type of condition in CamelCase or in foo.example.com/CamelCase. Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. 8.2. API endpoints The following API endpoints are available: /apis/machine.openshift.io/v1beta1/machinehealthchecks GET : list objects of kind MachineHealthCheck /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinehealthchecks DELETE : delete collection of MachineHealthCheck GET : list objects of kind MachineHealthCheck POST : create a MachineHealthCheck /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinehealthchecks/{name} DELETE : delete a MachineHealthCheck GET : read the specified MachineHealthCheck PATCH : partially update the specified MachineHealthCheck PUT : replace the specified MachineHealthCheck /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinehealthchecks/{name}/status GET : read status of the specified MachineHealthCheck PATCH : partially update status of the specified MachineHealthCheck PUT : replace status of the specified MachineHealthCheck 8.2.1. /apis/machine.openshift.io/v1beta1/machinehealthchecks Table 8.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion.
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with the "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to the objects watched. When the sendInitialEvents option is set, the resourceVersionMatch option must also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind MachineHealthCheck Table 8.2. HTTP responses HTTP code Response body 200 - OK MachineHealthCheckList schema 401 - Unauthorized Empty 8.2.2. /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinehealthchecks Table 8.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 8.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of MachineHealthCheck Table 8.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token.
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind MachineHealthCheck Table 8.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.8. HTTP responses HTTP code Reponse body 200 - OK MachineHealthCheckList schema 401 - Unauthorized Empty HTTP method POST Description create a MachineHealthCheck Table 8.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.10. Body parameters Parameter Type Description body MachineHealthCheck schema Table 8.11. HTTP responses HTTP code Reponse body 200 - OK MachineHealthCheck schema 201 - Created MachineHealthCheck schema 202 - Accepted MachineHealthCheck schema 401 - Unauthorized Empty 8.2.3. /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinehealthchecks/{name} Table 8.12. Global path parameters Parameter Type Description name string name of the MachineHealthCheck namespace string object name and auth scope, such as for teams and projects Table 8.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a MachineHealthCheck Table 8.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 8.15. 
Body parameters Parameter Type Description body DeleteOptions schema Table 8.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MachineHealthCheck Table 8.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 8.18. HTTP responses HTTP code Reponse body 200 - OK MachineHealthCheck schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MachineHealthCheck Table 8.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 8.20. Body parameters Parameter Type Description body Patch schema Table 8.21. HTTP responses HTTP code Reponse body 200 - OK MachineHealthCheck schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MachineHealthCheck Table 8.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.23. Body parameters Parameter Type Description body MachineHealthCheck schema Table 8.24. HTTP responses HTTP code Reponse body 200 - OK MachineHealthCheck schema 201 - Created MachineHealthCheck schema 401 - Unauthorized Empty 8.2.4. /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinehealthchecks/{name}/status Table 8.25. Global path parameters Parameter Type Description name string name of the MachineHealthCheck namespace string object name and auth scope, such as for teams and projects Table 8.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified MachineHealthCheck Table 8.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 8.28. HTTP responses HTTP code Reponse body 200 - OK MachineHealthCheck schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified MachineHealthCheck Table 8.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. The force flag must be unset for non-apply patch requests. Table 8.30. Body parameters Parameter Type Description body Patch schema Table 8.31. HTTP responses HTTP code Response body 200 - OK MachineHealthCheck schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified MachineHealthCheck Table 8.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.33. Body parameters Parameter Type Description body MachineHealthCheck schema Table 8.34. HTTP responses HTTP code Response body 200 - OK MachineHealthCheck schema 201 - Created MachineHealthCheck schema 401 - Unauthorized Empty
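As a concrete illustration of the schema and endpoints described above, the following minimal Python sketch builds a MachineHealthCheck object and exercises the create, list, and status endpoints through the Kubernetes Python client. This example is not part of the original reference: the object name, MachineSet label value, and threshold values are placeholders chosen for illustration, and openshift-machine-api is simply the namespace typically used for Machine API resources.

# Minimal sketch: create a MachineHealthCheck, list the collection, and read status.
# Placeholder values: example-mhc, example-machineset, "40%", "300s", "10m".
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in a pod
api = client.CustomObjectsApi()

GROUP, VERSION, PLURAL = "machine.openshift.io", "v1beta1", "machinehealthchecks"
NAMESPACE = "openshift-machine-api"

mhc = {
    "apiVersion": "machine.openshift.io/v1beta1",
    "kind": "MachineHealthCheck",
    "metadata": {"name": "example-mhc", "namespace": NAMESPACE},
    "spec": {
        # .spec.selector: label selector for the machines whose health is checked
        "selector": {
            "matchLabels": {
                "machine.openshift.io/cluster-api-machineset": "example-machineset"
            }
        },
        # .spec.unhealthyConditions: ORed node conditions, each with a duration timeout
        "unhealthyConditions": [
            {"type": "Ready", "status": "Unknown", "timeout": "300s"},
            {"type": "Ready", "status": "False", "timeout": "300s"},
        ],
        # .spec.maxUnhealthy: integer-or-string; remediation stops above this threshold
        "maxUnhealthy": "40%",
        # .spec.nodeStartupTimeout: unsigned duration string
        "nodeStartupTimeout": "10m",
    },
}

# POST /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinehealthchecks
api.create_namespaced_custom_object(GROUP, VERSION, NAMESPACE, PLURAL, mhc)

# GET /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinehealthchecks
for item in api.list_namespaced_custom_object(GROUP, VERSION, NAMESPACE, PLURAL)["items"]:
    print(item["metadata"]["name"])

# GET /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinehealthchecks/{name}/status
status = api.get_namespaced_custom_object_status(GROUP, VERSION, NAMESPACE, PLURAL, "example-mhc")
print(status.get("status", {}).get("currentHealthy"))

Whichever client is used, the request bodies and response schemas are those described in the tables above; an equivalent YAML manifest applied with the oc client produces the same object.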
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/machine_apis/machinehealthcheck-machine-openshift-io-v1beta1
Chapter 11. Supplementary Client Variant
Chapter 11. Supplementary Client Variant The following table lists all the packages in the Supplementary Client variant. For more information about support scope, see the Scope of Coverage Details document. Package Core Package? License acroread No Commercial acroread-plugin No Commercial chromium-browser No BSD and LGPLv2+ flash-plugin No Commercial java-1.5.0-ibm No IBM Binary Code License java-1.5.0-ibm-demo No IBM Binary Code License java-1.5.0-ibm-devel No IBM Binary Code License java-1.5.0-ibm-javacomm No IBM Binary Code License java-1.5.0-ibm-jdbc No IBM Binary Code License java-1.5.0-ibm-plugin No IBM Binary Code License java-1.5.0-ibm-src No IBM Binary Code License java-1.6.0-ibm No IBM Binary Code License java-1.6.0-ibm-demo No IBM Binary Code License java-1.6.0-ibm-devel No IBM Binary Code License java-1.6.0-ibm-javacomm No IBM Binary Code License java-1.6.0-ibm-jdbc No IBM Binary Code License java-1.6.0-ibm-plugin No IBM Binary Code License java-1.6.0-ibm-src No IBM Binary Code License java-1.7.1-ibm No IBM Binary Code License java-1.7.1-ibm-demo No IBM Binary Code License java-1.7.1-ibm-devel No IBM Binary Code License java-1.7.1-ibm-jdbc No IBM Binary Code License java-1.7.1-ibm-plugin No IBM Binary Code License java-1.7.1-ibm-src No IBM Binary Code License java-1.8.0-ibm No IBM Binary Code License java-1.8.0-ibm-demo No IBM Binary Code License java-1.8.0-ibm-devel No IBM Binary Code License java-1.8.0-ibm-jdbc No IBM Binary Code License java-1.8.0-ibm-plugin No IBM Binary Code License java-1.8.0-ibm-src No IBM Binary Code License kmod-kspiceusb-rhel60 No GPLv2 spice-usb-share No Redistributable, no modification permitted system-switch-java No GPLv2+ virtio-win No Red Hat Proprietary and GPLv2
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/package_manifest/chap-supplementary-client-variant
Specialized hardware and driver enablement
Specialized hardware and driver enablement OpenShift Container Platform 4.13 Learn about hardware enablement on OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/specialized_hardware_and_driver_enablement/index
Chapter 24. Integrating LDAP and SSL
Chapter 24. Integrating LDAP and SSL With Red Hat Process Automation Manager you can integrate LDAP and SSL through Red Hat Single Sign-On. For more information, see the Red Hat Single Sign-On Server Administration Guide .
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/ldap-ssl-con_install-on-eap
Chapter 10. Host grouping concepts
Chapter 10. Host grouping concepts Apart from the physical topology of Capsule Servers, Red Hat Satellite provides several logical units for grouping hosts. Hosts that are members of those groups inherit the group configuration. For example, the simple parameters that define the provisioning environment can be applied at the following levels: The main logical groups in Red Hat Satellite are: Organizations - the highest level logical groups for hosts. Organizations provide a strong separation of content and configuration. Each organization requires a separate Red Hat Subscription Manifest, and can be thought of as a separate virtual instance of a Satellite Server. Avoid the use of organizations if a lower level host grouping is applicable. Locations - a grouping of hosts that should match the physical location. Locations can be used to map the network infrastructure to prevent incorrect host placement or configuration. For example, you cannot assign a subnet, domain, or compute resources directly to a Capsule Server, only to a location. Host groups - the main carriers of host definitions including assigned Puppet classes, Content View, or operating system. It is recommended to configure the majority of settings at the host group level instead of defining hosts directly. Configuring a new host then largely becomes a matter of adding it to the right host group. As host groups can be nested, you can create a structure that best fits your requirements (see Section 10.1, "Host group structures" ). Host collections - a host registered to Satellite Server for the purpose of subscription and content management is called content host . Content hosts can be organized into host collections, which enables performing bulk actions such as package management or errata installation. Locations and host groups can be nested. Organizations and host collections are flat. 10.1. Host group structures The fact that host groups can be nested to inherit parameters from each other allows for designing host group hierarchies that fit particular workflows. A well planned host group structure can help to simplify the maintenance of host settings. This section outlines four approaches to organizing host groups. Figure 10.1. Host group structuring examples Flat structure The advantage of a flat structure is limited complexity, as inheritance is avoided. In a deployment with few host types, this scenario is the best option. However, without inheritance there is a risk of high duplication of settings between host groups. Lifecycle environment based structure In this hierarchy, the first host group level is reserved for parameters specific to a lifecycle environment. The second level contains operating system related definitions, and the third level contains application specific settings. Such structure is useful in scenarios where responsibilities are divided among lifecycle environments (for example, a dedicated owner for the Development , QA , and Production lifecycle stages). Application based structure This hierarchy is based on roles of hosts in a specific application. For example, it enables defining network settings for groups of back-end and front-end servers. The selected characteristics of hosts are segregated, which supports Puppet-focused management of complex configurations. However, the content views can only be assigned to host groups at the bottom level of this hierarchy. Location based structure In this hierarchy, the distribution of locations is aligned with the host group structure. 
In a scenario where the location (Capsule Server) topology determines many other attributes, this approach is the best option. On the other hand, this structure complicates sharing parameters across locations, therefore in complex environments with a large number of applications, the number of host group changes required for each configuration change increases significantly.
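To make the structuring approaches above more tangible, the following short Python sketch shows how the three-level lifecycle environment based hierarchy could be assembled as nested host groups through the Satellite REST API. This example is not part of the original guide: the server URL, credentials, CA path, and group names are placeholders, and the /api/hostgroups endpoint with its parent_id parameter follows the upstream Foreman API, so verify it against the API documentation for your Satellite version.

# Sketch: build Production > RHEL-9 > Webservers as nested host groups.
# All values below are placeholders; adjust the URL, auth, and CA bundle.
import requests

SATELLITE = "https://satellite.example.com"     # placeholder Satellite Server URL
AUTH = ("admin", "changeme")                    # placeholder credentials
CA_BUNDLE = "/path/to/satellite-ca.crt"         # placeholder CA certificate

def create_hostgroup(name, parent_id=None):
    payload = {"hostgroup": {"name": name}}
    if parent_id is not None:
        payload["hostgroup"]["parent_id"] = parent_id  # nesting: inherit from the parent group
    resp = requests.post(f"{SATELLITE}/api/hostgroups",
                         json=payload, auth=AUTH, verify=CA_BUNDLE)
    resp.raise_for_status()
    return resp.json()["id"]

# Level 1: lifecycle stage, level 2: operating system, level 3: application role.
# Child groups inherit parameters from their parents, so shared settings
# (content view, Puppet classes, operating system) are defined once, higher up.
prod_id = create_hostgroup("Production")
rhel_id = create_hostgroup("RHEL-9", parent_id=prod_id)
create_hostgroup("Webservers", parent_id=rhel_id)

Adding a new host then becomes a matter of assigning it to the Webservers group, which pulls in the operating system and lifecycle settings from the parent groups.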
[ "Global > Organization > Location > Domain > Host group > Host" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/overview_concepts_and_deployment_considerations/chap-architecture_guide-host_grouping_concepts
7.36. cvs
7.36. cvs 7.36.1. RHBA-2012:1302 - cvs bug fix update An updated cvs package that fixes two bugs is now available for Red Hat Enterprise Linux 6. [Update 19 November 2012] The file list of this advisory was updated to move the new cvs-inetd package from the base repository to the optional repository in the Client and HPC Node variants. No changes have been made to the packages themselves. The Concurrent Versions System (CVS) is a version control system that can record the history of your files. CVS only stores the differences between versions, instead of every version of every file you have ever created. CVS also keeps a log of who, when, and why changes occurred. BZ# 671145 Prior to this update, the C shell (csh) did not set the CVS_RSH environment variable to "ssh" and the remote shell (rsh) was used instead when the users accessed a remote CVS server. As a consequence, the connection was vulnerable to attacks because the remote shell is not encrypted or not necessarily enabled on every remote server. The cvs.csh script now uses valid csh syntax and the CVS_RSH environment variable is properly set at log-in. BZ# 695719 Prior to this update, the xinetd package was not a dependency of the cvs package. As a result, the CVS server was not accessible through network. With this update, the cvs-inetd package, which contains the CVS inetd configuration file, ensures that the xinetd package is installed as a dependency and the xinetd daemon is available on the system. All users of cvs are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/cvs
Kaoto
Kaoto Red Hat build of Apache Camel 4.8 Create and edit integrations based on Apache Camel with Kaoto
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/kaoto/index