Dataset columns: title (string), content (string), commands (list of strings, nullable), url (string)
Chapter 14. Porting containers to systemd using Podman
Chapter 14. Porting containers to systemd using Podman

Podman (Pod Manager) is a simple, daemonless, fully featured container engine. Podman provides a Docker-CLI comparable command line that makes the transition from other container engines easier and enables the management of pods, containers, and images. Originally, Podman was not designed to provide an entire Linux system or manage services, such as start-up order, dependency checking, and failed service recovery; systemd was responsible for complete system initialization. Because Red Hat has integrated containers with systemd, you can manage OCI and Docker-formatted containers built by Podman in the same way as other services and features are managed in a Linux system. You can use the systemd initialization service to work with pods and containers. With systemd unit files, you can: Set up a container or pod to start as a systemd service. Define the order in which the containerized service runs and check for dependencies (for example, making sure another service is running, a file is available, or a resource is mounted). Control the state of the systemd system using the systemctl command. You can generate portable descriptions of containers and pods by using systemd unit files.

14.1. Auto-generating a systemd unit file using Quadlets With Quadlet, you describe how to run a container in a format that is very similar to regular systemd unit files. The container descriptions focus on the relevant container details and hide the technical details of running containers under systemd. Create the <CTRNAME>.container unit file in one of the following directories: For root users: /usr/share/containers/systemd/ or /etc/containers/systemd/ For rootless users: $HOME/.config/containers/systemd/, $XDG_CONFIG_HOME/containers/systemd/, /etc/containers/systemd/users/$(UID), or /etc/containers/systemd/users/ Note Quadlet is available beginning with Podman v4.6. Prerequisites The container-tools module is installed. Procedure Create the mysleep.container unit file: In the [Container] section you must specify: Image - the container image you want to run Exec - the command you want to run inside the container You can also use all other fields specified in a systemd unit file. Create the mysleep.service based on the mysleep.container file: Optional: Check the status of the mysleep.service: Start the mysleep.service: Verification Check the status of the mysleep.service: List all containers: Note that the name of the created container consists of the following elements: a systemd- prefix; the name of the systemd unit, that is, systemd-mysleep. This naming helps to distinguish common containers from containers running in systemd units. It also helps to determine which unit a container runs in. If you want to change the name of the container, use the ContainerName field in the [Container] section. Additional resources Make systemd better for Podman with Quadlet Quadlet upstream documentation

14.2. Enabling systemd services When enabling the service, you have different options. Procedure Enable the service: To enable a service at system start, regardless of whether a user is logged in or not, enter: You have to copy the systemd unit files to the /etc/systemd/system directory. To start a service at user login and stop it at user logout, enter: You have to copy the systemd unit files to the $HOME/.config/systemd/user directory.
To enable users to start a service at system start and persist over logouts, enter: Additional resources systemctl and loginctl man pages on your system Enabling a system service to start at boot

14.3. Auto-starting containers using systemd You can control the state of the systemd system and service manager using the systemctl command. You can enable, start, or stop the service as a non-root user. To install the service as a root user, omit the --user option. Prerequisites The container-tools module is installed. Procedure Reload the systemd manager configuration: Enable the service container.service and start it at boot time: Start the service immediately: Check the status of the service: You can check if the service is enabled using the systemctl is-enabled container.service command. Verification List containers that are running or have exited: Note To stop container.service, enter: Additional resources systemctl man page on your system Running containers with Podman and shareable systemd services Enabling a system service to start at boot

14.4. Advantages of using Quadlets over the podman generate systemd command You can use the Quadlet tool, which describes how to run a container in a format similar to regular systemd unit files. Note Quadlet is available beginning with Podman v4.6. Quadlets have many advantages over generating unit files using the podman generate systemd command, such as: Easy to maintain: The container descriptions focus on the relevant container details and hide the technical details of running containers under systemd. Automatically updated: Quadlets do not require manually regenerating unit files after an update. If a newer version of Podman is released, your service is automatically updated when the systemctl daemon-reload command is executed, for example, at boot time. Simplified workflow: Thanks to the simplified syntax, you can create Quadlet files from scratch and deploy them anywhere. Support for standard systemd options: Quadlet extends the existing systemd unit syntax with new tables, for example, a table to configure a container. Note Quadlet supports a subset of Kubernetes YAML capabilities. For more information, see the support matrix of supported YAML fields. You can generate the YAML files by using one of the following tools: Podman: the podman generate kube command OpenShift: the oc generate command with the --dry-run option Kubernetes: the kubectl create command with the --dry-run option Quadlet supports these unit file types: Container units: Used to manage containers by running the podman run command. File extension: .container Section name: [Container] Required fields: Image, describing the container image the service runs Kube units: Used to manage containers defined in Kubernetes YAML files by running the podman kube play command. File extension: .kube Section name: [Kube] Required fields: Yaml, defining the path to the Kubernetes YAML file Network units: Used to create Podman networks that may be referenced in .container or .kube files. File extension: .network Section name: [Network] Required fields: None Volume units: Used to create Podman volumes that may be referenced in .container files. File extension: .volume Section name: [Volume] Required fields: None Additional resources Quadlet upstream documentation

14.5. Generating a systemd unit file using Podman Podman allows systemd to control and manage container processes. You can generate a systemd unit file for existing containers and pods using the podman generate systemd command.
It is recommended to use podman generate systemd because the generated unit files change frequently (via updates to Podman) and the podman generate systemd command ensures that you get the latest version of the unit files. Note Starting with Podman v4.6, you can use Quadlets, which describe how to run a container in a format similar to regular systemd unit files and hide the complexity of running containers under systemd. Prerequisites The container-tools module is installed. Procedure Create a container (for example myubi): Use the container name or ID to generate the systemd unit file and direct it into the ~/.config/systemd/user/container-myubi.service file: Verification Display the content of the generated systemd unit file: The Restart=on-failure line sets the restart policy and instructs systemd to restart when the service cannot be started or stopped cleanly, or when the process exits non-zero. The ExecStart line describes how we start the container. The ExecStop line describes how we stop and remove the container. Additional resources Running containers with Podman and shareable systemd services

14.6. Automatically generating a systemd unit file using Podman By default, Podman generates a unit file for existing containers or pods. You can generate more portable systemd unit files using the podman generate systemd --new command. The --new flag instructs Podman to generate unit files that create, start, and remove containers. Note Starting with Podman v4.6, you can use Quadlets, which describe how to run a container in a format similar to regular systemd unit files and hide the complexity of running containers under systemd. Prerequisites The container-tools module is installed. Procedure Pull the image you want to use on your system. For example, to pull the httpd-24 image: Optional: List all images available on your system: Create the httpd container: Optional: Verify the container has been created: Generate a systemd unit file for the httpd container: Display the content of the generated container-httpd.service systemd unit file: Note Unit files generated using the --new option do not expect containers and pods to exist. Therefore, they perform the podman run command when starting the service (see the ExecStart line) instead of the podman start command. For example, see section Generating a systemd unit file using Podman. The podman run command uses the following command-line options: The --conmon-pidfile option points to a path to store the process ID for the conmon process running on the host. The conmon process terminates with the same exit status as the container, which allows systemd to report the correct service status and restart the container if needed. The --cidfile option points to the path that stores the container ID. The %t is the path to the run time directory root, for example /run/user/$UserID. The %n is the full name of the service. Copy unit files to /etc/systemd/system for installing them as a root user: Enable and start the container-httpd.service: Verification Check the status of the container-httpd.service: Additional resources Improved Systemd Integration with Podman 2.0 Enabling a system service to start at boot

14.7. Automatically starting pods using systemd You can start multiple containers as systemd services. Note that the systemctl command should only be used on the pod and you should not start or stop containers individually via systemctl, as they are managed by the pod service along with the internal infra-container.
Note Starting with Podman v4.6, you can use Quadlets, which describe how to run a container in a format similar to regular systemd unit files and hide the complexity of running containers under systemd. Prerequisites The container-tools module is installed. Procedure Create an empty pod, for example named systemd-pod: Optional: List all pods: Create two containers in the empty pod. For example, to create container0 and container1 in systemd-pod: Optional: List all pods and containers associated with them: Generate the systemd unit file for the new pod: Note that three systemd unit files are generated, one for the systemd-pod pod and two for the containers container0 and container1. Display the pod-systemd-pod.service unit file: The Requires line in the [Unit] section defines dependencies on the container-container0.service and container-container1.service unit files. Both unit files will be activated. The ExecStart and ExecStop lines in the [Service] section start and stop the infra-container, respectively. Display the container-container0.service unit file: The BindsTo line in the [Unit] section defines the dependency on the pod-systemd-pod.service unit file. The ExecStart and ExecStop lines in the [Service] section start and stop container0, respectively. Display the container-container1.service unit file: Copy all the generated files to $HOME/.config/systemd/user for installing as a non-root user: Enable the service and start it at user login: Note that the service stops at user logout. Verification Check if the service is enabled: Additional resources podman-create, podman-generate-systemd, and systemctl man pages on your system Running containers with Podman and shareable systemd services Enabling a system service to start at boot

14.8. Automatically updating containers using Podman The podman auto-update command allows you to automatically update containers according to their auto-update policy. The podman auto-update command updates services when the container image is updated on the registry. To use auto-updates, containers must be created with the --label "io.containers.autoupdate=image" label and run in a systemd unit generated by the podman generate systemd --new command. Podman searches for running containers with the "io.containers.autoupdate" label set to "image" and communicates with the container registry. If the image has changed, Podman restarts the corresponding systemd unit to stop the old container and create a new one with the new image. As a result, the container, its environment, and all dependencies are restarted. Note Starting with Podman v4.6, you can use Quadlets, which describe how to run a container in a format similar to regular systemd unit files and hide the complexity of running containers under systemd. Prerequisites The container-tools module is installed. Procedure Start a myubi container based on the registry.access.redhat.com/ubi8/ubi-init image: Optional: List containers that are running or have exited: Generate a systemd unit file for the myubi container: Copy unit files to /usr/lib/systemd/system for installing them as a root user: Reload the systemd manager configuration: Start and check the status of the container: Auto-update the container: Additional resources Improved Systemd Integration with Podman 2.0 Running containers with Podman and shareable systemd services Enabling a system service to start at boot
14.9. Automatically updating containers using systemd As mentioned in section Automatically updating containers using Podman, you can update the container using the podman auto-update command. It integrates into custom scripts and can be invoked when needed. Another way to auto-update the containers is to use the pre-installed podman-auto-update.timer and podman-auto-update.service systemd units. The podman-auto-update.timer can be configured to trigger auto-updates at a specific date or time. The podman-auto-update.service can further be started by the systemctl command or be used as a dependency by other systemd services. As a result, auto-updates based on time and events can be triggered in various ways to meet individual needs and use cases. Note Starting with Podman v4.6, you can use Quadlets, which describe how to run a container in a format similar to regular systemd unit files and hide the complexity of running containers under systemd. Prerequisites The container-tools module is installed. Procedure Display the podman-auto-update.service unit file: Display the podman-auto-update.timer unit file: In this example, the podman auto-update command is launched daily at midnight. Enable the podman-auto-update.timer service at system start: Start the systemd service: Optional: List all timers: You can see that podman-auto-update.timer activates the podman-auto-update.service. Additional resources Improved Systemd Integration with Podman 2.0 Running containers with Podman and shareable systemd services Enabling a system service to start at boot
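To make the Quadlet format described in section 14.1 concrete, a minimal .container unit might look like the following sketch. It reuses the image, command, and WantedBy targets from the mysleep example in this chapter; the ContainerName value is an illustrative assumption showing how to override the default systemd-mysleep container name.

# $HOME/.config/containers/systemd/mysleep.container (rootless path from section 14.1)
[Unit]
Description=The sleep container
After=local-fs.target

[Container]
# Container image to run and the command to execute inside it
Image=registry.access.redhat.com/ubi8-minimal:latest
Exec=sleep 1000
# Optional, assumed name: without it the container is named systemd-mysleep
ContainerName=my-sleeper

[Install]
# Start by default on boot
WantedBy=multi-user.target default.target

After saving the file, running systemctl --user daemon-reload generates mysleep.service, as shown in the procedure above.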
[ "cat USDHOME/.config/containers/systemd/mysleep.container [Unit] Description=The sleep container After=local-fs.target [Container] Image=registry.access.redhat.com/ubi8-minimal:latest Exec=sleep 1000 Start by default on boot WantedBy=multi-user.target default.target", "systemctl --user daemon-reload", "systemctl --user status mysleep.service β—‹ mysleep.service - The sleep container Loaded: loaded (/home/ username /.config/containers/systemd/mysleep.container; generated) Active: inactive (dead)", "systemctl --user start mysleep.service", "systemctl --user status mysleep.service ● mysleep.service - The sleep container Loaded: loaded (/home/ username /.config/containers/systemd/mysleep.container; generated) Active: active (running) since Thu 2023-02-09 18:07:23 EST; 2s ago Main PID: 265651 (conmon) Tasks: 3 (limit: 76815) Memory: 1.6M CPU: 94ms CGroup:", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 421c8293fc1b registry.access.redhat.com/ubi8-minimal:latest sleep 1000 30 seconds ago Up 10 seconds ago systemd-mysleep", "systemctl enable <service>", "systemctl --user enable <service>", "loginctl enable-linger <username>", "systemctl --user daemon-reload", "systemctl --user enable container.service", "systemctl --user start container.service", "systemctl --user status container.service ● container.service - Podman container.service Loaded: loaded (/home/user/.config/systemd/user/container.service; enabled; vendor preset: enabled) Active: active (running) since Wed 2020-09-16 11:56:57 CEST; 8s ago Docs: man:podman-generate-systemd(1) Process: 80602 ExecStart=/usr/bin/podman run --conmon-pidfile //run/user/1000/container.service-pid --cidfile //run/user/1000/container.service-cid -d ubi8-minimal:> Process: 80601 ExecStartPre=/usr/bin/rm -f //run/user/1000/container.service-pid //run/user/1000/container.service-cid (code=exited, status=0/SUCCESS) Main PID: 80617 (conmon) CGroup: /user.slice/user-1000.slice/[email protected]/container.service β”œβ”€ 2870 /usr/bin/podman β”œβ”€80612 /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 --enable-sandbox --enable-seccomp -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/cni-> β”œβ”€80614 /usr/bin/fuse-overlayfs -o lowerdir=/home/user/.local/share/containers/storage/overlay/l/YJSPGXM2OCDZPLMLXJOW3NRF6Q:/home/user/.local/share/contain> β”œβ”€80617 /usr/bin/conmon --api-version 1 -c cbc75d6031508dfd3d78a74a03e4ace1732b51223e72a2ce4aa3bfe10a78e4fa -u cbc75d6031508dfd3d78a74a03e4ace1732b51223e72> └─cbc75d6031508dfd3d78a74a03e4ace1732b51223e72a2ce4aa3bfe10a78e4fa └─80626 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1d", "podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES f20988d59920 registry.access.redhat.com/ubi8-minimal:latest top 12 seconds ago Up 11 seconds ago funny_zhukovsky", "systemctl --user stop container.service", "podman create --name myubi registry.access.redhat.com/ubi8:latest sleep infinity 0280afe98bb75a5c5e713b28de4b7c5cb49f156f1cce4a208f13fee2f75cb453", "podman generate systemd --name myubi > ~/.config/systemd/user/container-myubi.service", "cat ~/.config/systemd/user/container-myubi.service container-myubi.service autogenerated by Podman 3.3.1 Wed Sep 8 20:34:46 CEST 2021 [Unit] Description=Podman container-myubi.service Documentation=man:podman-generate-systemd(1) Wants=network-online.target After=network-online.target RequiresMountsFor=/run/user/1000/containers [Service] Environment=PODMAN_SYSTEMD_UNIT=%n Restart=on-failure TimeoutStopSec=70 ExecStart=/usr/bin/podman start 
myubi ExecStop=/usr/bin/podman stop -t 10 myubi ExecStopPost=/usr/bin/podman stop -t 10 myubi PIDFile=/run/user/1000/containers/overlay-containers/9683103f58a32192c84801f0be93446cb33c1ee7d9cdda225b78049d7c5deea4/userdata/conmon.pid Type=forking [Install] WantedBy=multi-user.target default.target", "podman pull registry.access.redhat.com/ubi8/httpd-24", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi8/httpd-24 latest 8594be0a0b57 2 weeks ago 462 MB", "podman create --name httpd -p 8080:8080 registry.access.redhat.com/ubi8/httpd-24 cdb9f981cf143021b1679599d860026b13a77187f75e46cc0eac85293710a4b1", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES cdb9f981cf14 registry.access.redhat.com/ubi8/httpd-24:latest /usr/bin/run-http... 5 minutes ago Created 0.0.0.0:8080->8080/tcp httpd", "podman generate systemd --new --files --name httpd /root/container-httpd.service", "cat /root/container-httpd.service container-httpd.service autogenerated by Podman 3.3.1 Wed Sep 8 20:41:44 CEST 2021 [Unit] Description=Podman container-httpd.service Documentation=man:podman-generate-systemd(1) Wants=network-online.target After=network-online.target RequiresMountsFor=%t/containers [Service] Environment=PODMAN_SYSTEMD_UNIT=%n Restart=on-failure TimeoutStopSec=70 ExecStartPre=/bin/rm -f %t/%n.ctr-id ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --sdnotify=conmon --cgroups=no-conmon --rm -d --replace --name httpd -p 8080:8080 registry.access.redhat.com/ubi8/httpd-24 ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id Type=notify NotifyAccess=all [Install] WantedBy=multi-user.target default.target", "cp -Z container-httpd.service /etc/systemd/system", "systemctl daemon-reload systemctl enable --now container-httpd.service Created symlink /etc/systemd/system/multi-user.target.wants/container-httpd.service /etc/systemd/system/container-httpd.service. 
Created symlink /etc/systemd/system/default.target.wants/container-httpd.service /etc/systemd/system/container-httpd.service.", "systemctl status container-httpd.service ● container-httpd.service - Podman container-httpd.service Loaded: loaded (/etc/systemd/system/container-httpd.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-08-24 09:53:40 EDT; 1min 5s ago Docs: man:podman-generate-systemd(1) Process: 493317 ExecStart=/usr/bin/podman run --conmon-pidfile /run/container-httpd.pid --cidfile /run/container-httpd.ctr-id --cgroups=no-conmon -d --repla> Process: 493315 ExecStartPre=/bin/rm -f /run/container-httpd.pid /run/container-httpd.ctr-id (code=exited, status=0/SUCCESS) Main PID: 493435 (conmon)", "podman pod create --name systemd-pod 11d4646ba41b1fffa51c108cbdf97cfab3213f7bd9b3e1ca52fe81b90fed5577", "podman pod ps POD ID NAME STATUS CREATED # OF CONTAINERS INFRA ID 11d4646ba41b systemd-pod Created 40 seconds ago 1 8a428b257111 11d4646ba41b1fffa51c108cbdf97cfab3213f7bd9b3e1ca52fe81b90fed5577", "podman create --pod systemd-pod --name container0 registry.access.redhat.com/ubi 8 top podman create --pod systemd-pod --name container1 registry.access.redhat.com/ubi 8 top", "podman ps -a --pod CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD ID PODNAME 24666f47d9b2 registry.access.redhat.com/ubi8:latest top 3 minutes ago Created container0 3130f724e229 systemd-pod 56eb1bf0cdfe k8s.gcr.io/pause:3.2 4 minutes ago Created 3130f724e229-infra 3130f724e229 systemd-pod 62118d170e43 registry.access.redhat.com/ubi8:latest top 3 seconds ago Created container1 3130f724e229 systemd-pod", "podman generate systemd --files --name systemd-pod /home/user1/pod-systemd-pod.service /home/user1/container-container0.service /home/user1/container-container1.service", "cat pod-systemd-pod.service pod-systemd-pod.service autogenerated by Podman 3.3.1 Wed Sep 8 20:49:17 CEST 2021 [Unit] Description=Podman pod-systemd-pod.service Documentation=man:podman-generate-systemd(1) Wants=network-online.target After=network-online.target RequiresMountsFor= Requires=container-container0.service container-container1.service Before=container-container0.service container-container1.service [Service] Environment=PODMAN_SYSTEMD_UNIT=%n Restart=on-failure TimeoutStopSec=70 ExecStart=/usr/bin/podman start bcb128965b8e-infra ExecStop=/usr/bin/podman stop -t 10 bcb128965b8e-infra ExecStopPost=/usr/bin/podman stop -t 10 bcb128965b8e-infra PIDFile=/run/user/1000/containers/overlay-containers/1dfdcf20e35043939ea3f80f002c65c00d560e47223685dbc3230e26fe001b29/userdata/conmon.pid Type=forking [Install] WantedBy=multi-user.target default.target", "cat container-container0.service container-container0.service autogenerated by Podman 3.3.1 Wed Sep 8 20:49:17 CEST 2021 [Unit] Description=Podman container-container0.service Documentation=man:podman-generate-systemd(1) Wants=network-online.target After=network-online.target RequiresMountsFor=/run/user/1000/containers BindsTo=pod-systemd-pod.service After=pod-systemd-pod.service [Service] Environment=PODMAN_SYSTEMD_UNIT=%n Restart=on-failure TimeoutStopSec=70 ExecStart=/usr/bin/podman start container0 ExecStop=/usr/bin/podman stop -t 10 container0 ExecStopPost=/usr/bin/podman stop -t 10 container0 PIDFile=/run/user/1000/containers/overlay-containers/4bccd7c8616ae5909b05317df4066fa90a64a067375af5996fdef9152f6d51f5/userdata/conmon.pid Type=forking [Install] WantedBy=multi-user.target default.target", "cat container-container1.service", "cp 
pod-systemd-pod.service container-container0.service container-container1.service USDHOME/.config/systemd/user", "systemctl enable --user pod-systemd-pod.service Created symlink /home/user1/.config/systemd/user/multi-user.target.wants/pod-systemd-pod.service /home/user1/.config/systemd/user/pod-systemd-pod.service. Created symlink /home/user1/.config/systemd/user/default.target.wants/pod-systemd-pod.service /home/user1/.config/systemd/user/pod-systemd-pod.service.", "systemctl is-enabled pod-systemd-pod.service enabled", "podman run --label \"io.containers.autoupdate=image\" --name myubi -dt registry.access.redhat.com/ubi8/ubi-init top bc219740a210455fa27deacc96d50a9e20516492f1417507c13ce1533dbdcd9d", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 76465a5e2933 registry.access.redhat.com/8/ubi-init:latest top 24 seconds ago Up 23 seconds ago myubi", "podman generate systemd --new --files --name myubi /root/container-myubi.service", "cp -Z ~/container-myubi.service /usr/lib/systemd/system", "systemctl daemon-reload", "systemctl start container-myubi.service systemctl status container-myubi.service", "podman auto-update", "cat /usr/lib/systemd/system/podman-auto-update.service [Unit] Description=Podman auto-update service Documentation=man:podman-auto-update(1) Wants=network.target After=network-online.target [Service] Type=oneshot ExecStart=/usr/bin/podman auto-update [Install] WantedBy=multi-user.target default.target", "cat /usr/lib/systemd/system/podman-auto-update.timer [Unit] Description=Podman auto-update timer [Timer] OnCalendar=daily Persistent=true [Install] WantedBy=timers.target", "systemctl enable podman-auto-update.timer", "systemctl start podman-auto-update.timer", "systemctl list-timers --all NEXT LEFT LAST PASSED UNIT ACTIVATES Wed 2020-12-09 00:00:00 CET 9h left n/a n/a podman-auto-update.timer podman-auto-update.service" ]
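Section 14.9 notes that podman-auto-update.timer can be configured to trigger auto-updates at a specific date or time. One way to do that, sketched below under the assumption that a weekly run at 03:00 on Sunday is wanted, is a standard systemd drop-in override rather than editing the packaged unit; the override path shown is the one that systemctl edit podman-auto-update.timer would create.

# /etc/systemd/system/podman-auto-update.timer.d/override.conf
[Timer]
# Reset the packaged OnCalendar=daily, then set the assumed custom schedule
OnCalendar=
OnCalendar=Sun *-*-* 03:00
Persistent=true

# Apply the change and reschedule the timer
systemctl daemon-reload
systemctl restart podman-auto-update.timer

The systemctl list-timers --all command from the chapter can then confirm the new next-run time.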
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/building_running_and_managing_containers/assembly_porting-containers-to-systemd-using-podman_building-running-and-managing-containers
5.4. Deleting a Tag
5.4. Deleting a Tag When a tag is no longer needed, remove it. Deleting a Tag Click the Tags icon in the header bar. Select the tag you want to delete and click Remove. A message warns you that removing the tag will also remove all descendants of the tag. Click OK. You have removed the tag and all its descendants. The tag is also removed from all the objects that it was attached to.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/deleting_a_tag
Chapter 59. JmxTransSpec schema reference
Chapter 59. JmxTransSpec schema reference The type JmxTransSpec has been deprecated. Used in: KafkaSpec

Property | Property type | Description
image | string | The image to use for the JmxTrans.
outputDefinitions | JmxTransOutputDefinitionTemplate array | Defines the output hosts that will be referenced later on. For more information on these properties, see JmxTransOutputDefinitionTemplate schema reference.
logLevel | string | Sets the logging level of the JmxTrans deployment. For more information, see JmxTrans Logging Level.
kafkaQueries | JmxTransQueryTemplate array | Queries to send to the Kafka brokers to define what data should be read from each broker. For more information on these properties, see JmxTransQueryTemplate schema reference.
resources | ResourceRequirements | CPU and memory resources to reserve.
template | JmxTransTemplate | Template for JmxTrans resources.
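To show where these properties sit, the following is a rough YAML sketch of a jmxTrans block inside a Kafka custom resource. The property names come from the table above; the cluster name, MBean pattern, attribute, and output writer class are illustrative assumptions rather than values from this reference, and because JmxTransSpec is deprecated this block may be rejected by newer releases.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster                # assumed cluster name
spec:
  kafka:
    # ... broker configuration, including jmxOptions so JMX metrics are exposed
  jmxTrans:
    logLevel: info
    outputDefinitions:
      - name: standardOut         # referenced from kafkaQueries.outputs
        outputType: com.googlecode.jmxtrans.model.output.StdOutWriter   # assumed writer class
    kafkaQueries:
      - targetMBean: "kafka.server:type=BrokerTopicMetrics,name=*"      # assumed MBean pattern
        attributes: ["Count"]
        outputs: ["standardOut"]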
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-jmxtransspec-reference
Chapter 3. Prerequisites for Configuring Capsule Servers for Load Balancing
Chapter 3. Prerequisites for Configuring Capsule Servers for Load Balancing To configure Capsule Servers for load balancing, complete the following procedures described in Installing Capsule Server. Satellite does not support configuring existing Capsule Servers for load balancing.
Registering Capsule Server to Satellite Server
Attaching the Satellite Infrastructure Subscription
Configuring Repositories
Synchronizing the System Clock With chronyd
Installing Capsule Server Packages
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/configuring_capsules_with_a_load_balancer/preparing-satellite-server-and-capsule-servers
Configuring and managing high availability clusters
Configuring and managing high availability clusters Red Hat Enterprise Linux 8 Using the Red Hat High Availability Add-On to create and maintain Pacemaker clusters Red Hat Customer Content Services
[ "yum install pcs pacemaker fence-agents-all systemctl start pcsd.service systemctl enable pcsd.service", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --reload", "passwd hacluster pcs host auth z1.example.com", "pcs cluster setup my_cluster --start z1.example.com pcs cluster status Cluster Status: Stack: corosync Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum Last updated: Thu Oct 11 16:11:18 2018 Last change: Thu Oct 11 16:11:00 2018 by hacluster via crmd on z1.example.com 1 node configured 0 resources configured PCSD Status: z1.example.com: Online", "pcs property set stonith-enabled=false", "yum install -y httpd wget firewall-cmd --permanent --add-service=http firewall-cmd --reload cat <<-END >/var/www/html/index.html <html> <body>My Test Site - USD(hostname)</body> </html> END", "cat <<-END > /etc/httpd/conf.d/status.conf <Location /server-status> SetHandler server-status Order deny,allow Deny from all Allow from 127.0.0.1 Allow from ::1 </Location> END", "pcs resource describe apache", "pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.122.120 --group apachegroup pcs resource create WebSite ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl=\"http://localhost/server-status\" --group apachegroup pcs status Cluster name: my_cluster Stack: corosync Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum Last updated: Fri Oct 12 09:54:33 2018 Last change: Fri Oct 12 09:54:30 2018 by root via cibadmin on z1.example.com 1 node configured 2 resources configured Online: [ z1.example.com ] Full list of resources: Resource Group: apachegroup ClusterIP (ocf::heartbeat:IPaddr2): Started z1.example.com WebSite (ocf::heartbeat:apache): Started z1.example.com PCSD Status: z1.example.com: Online", "pcs resource config WebSite Resource: WebSite (class=ocf provider=heartbeat type=apache) Attributes: configfile=/etc/httpd/conf/httpd.conf statusurl=http://localhost/server-status Operations: start interval=0s timeout=40s (WebSite-start-interval-0s) stop interval=0s timeout=60s (WebSite-stop-interval-0s) monitor interval=1min (WebSite-monitor-interval-1min)", "killall -9 httpd", "pcs status Cluster name: my_cluster Current DC: z1.example.com (version 1.1.13-10.el7-44eb2dd) - partition with quorum 1 node and 2 resources configured Online: [ z1.example.com ] Full list of resources: Resource Group: apachegroup ClusterIP (ocf::heartbeat:IPaddr2): Started z1.example.com WebSite (ocf::heartbeat:apache): Started z1.example.com Failed Resource Actions: * WebSite_monitor_60000 on z1.example.com 'not running' (7): call=13, status=complete, exitreason='none', last-rc-change='Thu Oct 11 23:45:50 2016', queued=0ms, exec=0ms PCSD Status: z1.example.com: Online", "pcs resource cleanup WebSite", "pcs cluster stop --all", "yum install pcs pacemaker fence-agents-all systemctl start pcsd.service systemctl enable pcsd.service", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --reload", "passwd hacluster", "pcs host auth z1.example.com z2.example.com", "pcs cluster setup my_cluster --start z1.example.com z2.example.com", "pcs property set stonith-enabled=false", "pcs cluster status Cluster Status: Stack: corosync Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum Last updated: Thu Oct 11 16:11:18 2018 Last change: Thu Oct 11 16:11:00 2018 by hacluster via crmd on z1.example.com 2 nodes configured 0 resources configured PCSD Status: z1.example.com: 
Online z2.example.com: Online", "yum install -y httpd wget firewall-cmd --permanent --add-service=http firewall-cmd --reload cat <<-END >/var/www/html/index.html <html> <body>My Test Site - USD(hostname)</body> </html> END", "cat <<-END > /etc/httpd/conf.d/status.conf <Location /server-status> SetHandler server-status Order deny,allow Deny from all Allow from 127.0.0.1 Allow from ::1 </Location> END", "pcs resource describe apache", "pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.122.120 --group apachegroup pcs resource create WebSite ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl=\"http://localhost/server-status\" --group apachegroup pcs status Cluster name: my_cluster Stack: corosync Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum Last updated: Fri Oct 12 09:54:33 2018 Last change: Fri Oct 12 09:54:30 2018 by root via cibadmin on z1.example.com 2 nodes configured 2 resources configured Online: [ z1.example.com z2.example.com ] Full list of resources: Resource Group: apachegroup ClusterIP (ocf::heartbeat:IPaddr2): Started z1.example.com WebSite (ocf::heartbeat:apache): Started z1.example.com PCSD Status: z1.example.com: Online z2.example.com: Online", "killall -9 httpd", "pcs status Cluster name: my_cluster Stack: corosync Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum Last updated: Fri Oct 12 09:54:33 2018 Last change: Fri Oct 12 09:54:30 2018 by root via cibadmin on z1.example.com 2 nodes configured 2 resources configured Online: [ z1.example.com z2.example.com ] Full list of resources: Resource Group: apachegroup ClusterIP (ocf::heartbeat:IPaddr2): Started z1.example.com WebSite (ocf::heartbeat:apache): Started z1.example.com Failed Resource Actions: * WebSite_monitor_60000 on z1.example.com 'not running' (7): call=31, status=complete, exitreason='none', last-rc-change='Fri Feb 5 21:01:41 2016', queued=0ms, exec=0ms", "pcs resource cleanup WebSite", "pcs node standby z1.example.com", "pcs status Cluster name: my_cluster Stack: corosync Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum Last updated: Fri Oct 12 09:54:33 2018 Last change: Fri Oct 12 09:54:30 2018 by root via cibadmin on z1.example.com 2 nodes configured 2 resources configured Node z1.example.com: standby Online: [ z2.example.com ] Full list of resources: Resource Group: apachegroup ClusterIP (ocf::heartbeat:IPaddr2): Started z2.example.com WebSite (ocf::heartbeat:apache): Started z2.example.com", "pcs node unstandby z1.example.com", "pcs cluster stop --all", "pcs resource -h", "pcs cluster cib filename", "pcs cluster cib testfile", "pcs cluster cib original.xml", "cp original.xml updated.xml", "pcs -f updated.xml resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 op monitor interval=30s", "pcs cluster cib-push updated.xml diff-against=original.xml", "pcs cluster cib-push filename", "pcs cluster cib-push --config filename", "pcs status", "pcs status commands", "pcs status resources", "pcs cluster status", "pcs config", "pcs cluster config update [transport pass:quotes[ transport options ]] [compression pass:quotes[ compression options ]] [crypto pass:quotes[ crypto options ]] [totem pass:quotes[ totem options ]] [--corosync_conf pass:quotes[ path ]]", "pcs cluster config update transport knet_pmtud_interval=35 totem token=10000 join=100", "pcs cluster corosync", "pcs cluster config Cluster Name: HACluster Cluster UUID: ad4ae07dcafe4066b01f1cc9391f54f5 Transport: 
knet Nodes: r8-node-01: Link 0 address: r8-node-01 Link 1 address: 192.168.122.121 nodeid: 1 r8-node-02: Link 0 address: r8-node-02 Link 1 address: 192.168.122.122 nodeid: 2 Links: Link 1: linknumber: 1 ping_interval: 1000 ping_timeout: 2000 pong_count: 5 Compression Options: level: 9 model: zlib threshold: 150 Crypto Options: cipher: aes256 hash: sha256 Totem Options: downcheck: 2000 join: 50 token: 10000 Quorum Device: net Options: sync_timeout: 2000 timeout: 3000 Model Options: algorithm: lms host: r8-node-03 Heuristics: exec_ping: ping -c 1 127.0.0.1", "pcs cluster config show --output-format=cmd pcs cluster setup HACluster r8-node-01 addr=r8-node-01 addr=192.168.122.121 r8-node-02 addr=r8-node-02 addr=192.168.122.122 transport knet link linknumber=1 ping_interval=1000 ping_timeout=2000 pong_count=5 compression level=9 model=zlib threshold=150 crypto cipher=aes256 hash=sha256 totem downcheck=2000 join=50 token=10000", "subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms", "yum install pcs pacemaker fence-agents-all", "yum install pcs pacemaker fence-agents- model", "rpm -q -a | grep fence fence-agents-rhevm-4.0.2-3.el7.x86_64 fence-agents-ilo-mp-4.0.2-3.el7.x86_64 fence-agents-ipmilan-4.0.2-3.el7.x86_64", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --add-service=high-availability", "passwd hacluster Changing password for user hacluster. New password: Retype new password: passwd: all authentication tokens updated successfully.", "systemctl start pcsd.service systemctl enable pcsd.service", "yum install pcp-zeroconf", "pcs host auth z1.example.com z2.example.com Username: hacluster Password: z1.example.com: Authorized z2.example.com: Authorized", "pcs cluster setup my_cluster --start z1.example.com z2.example.com", "pcs cluster enable --all", "pcs cluster status Cluster Status: Stack: corosync Current DC: z2.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum Last updated: Thu Oct 11 16:11:18 2018 Last change: Thu Oct 11 16:11:00 2018 by hacluster via crmd on z2.example.com 2 Nodes configured 0 Resources configured", "pcs cluster setup pass:quotes[ cluster_name ] pass:quotes[ node1_name ] addr=pass:quotes[ node1_link0_address ] addr=pass:quotes[ node1_link1_address ] pass:quotes[ node2_name ] addr=pass:quotes[ node2_link0_address ] addr=pass:quotes[ node2_link1_address ]", "pcs cluster setup my_twolink_cluster rh80-node1 addr=192.168.122.201 addr=192.168.123.201 rh80-node2 addr=192.168.122.202 addr=192.168.123.202", "pcs cluster setup my_twolink_cluster rh80-node1 addr=192.168.122.201 addr=192.168.123.201 rh80-node2 addr=192.168.122.202 addr=192.168.123.202 transport knet link link_priority=1 link link_priority=0 pcs cluster setup my_twolink_cluster rh80-node1 addr=192.168.122.201 addr=192.168.123.201 rh80-node2 addr=192.168.122.202 addr=192.168.123.202 transport knet link linknumber=1 link_priority=0 link link_priority=1", "pcs cluster setup my_twolink_cluster rh80-node1 addr=192.168.122.201 addr=192.168.123.201 rh80-node2 addr=192.168.122.202 addr=192.168.123.202 transport knet link_mode=active", "pcs cluster setup my_twolink_cluster rh80-node1 addr=192.168.122.201 addr=192.168.123.201 rh80-node2 addr=192.168.122.202 addr=192.168.123.202 transport knet link_mode=active link link_priority=1 link link_priority=0", "pcs stonith create myapc fence_apc_snmp ipaddr=\"zapc.example.com\" pcmk_host_map=\"z1.example.com:1;z2.example.com:2\" login=\"apc\" passwd=\"apc\"", "pcs stonith config myapc Resource: myapc (class=stonith 
type=fence_apc_snmp) Attributes: ipaddr=zapc.example.com pcmk_host_map=z1.example.com:1;z2.example.com:2 login=apc passwd=apc Operations: monitor interval=60s (myapc-monitor-interval-60s)", "pcs config backup filename", "pcs config restore [--local] [ filename ]", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --add-service=high-availability", "Configuration option global/system_id_source. system_id_source = \"uname\"", "lvm systemid system ID: z1.example.com uname -n z1.example.com", "pvcreate /dev/sdb1 Physical volume \"/dev/sdb1\" successfully created", "vgcreate --setautoactivation n my_vg /dev/sdb1 Volume group \"my_vg\" successfully created", "vgcreate my_vg /dev/sdb1 Volume group \"my_vg\" successfully created", "vgs -o+systemid VG #PV #LV #SN Attr VSize VFree System ID my_vg 1 0 0 wz--n- <1.82t <1.82t z1.example.com", "lvcreate -L450 -n my_lv my_vg Rounding up size to full physical extent 452.00 MiB Logical volume \"my_lv\" created", "lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert my_lv my_vg -wi-a---- 452.00m", "mkfs.xfs /dev/my_vg/my_lv meta-data=/dev/my_vg/my_lv isize=512 agcount=4, agsize=28928 blks = sectsz=512 attr=2, projid32bit=1", "lvmdevices --adddev /dev/sdb1", "vgs --noheadings -o vg_name my_vg rhel_home rhel_root", "auto_activation_volume_list = [ \"rhel_root\", \"rhel_home\" ]", "dracut -H -f /boot/initramfs-USD(uname -r).img USD(uname -r)", "pcs cluster start", "pcs cluster start --all", "yum install -y httpd wget", "firewall-cmd --permanent --add-service=http firewall-cmd --permanent --zone=public --add-service=http firewall-cmd --reload", "cat <<-END > /etc/httpd/conf.d/status.conf <Location /server-status> SetHandler server-status Require local </Location> END", "lvchange -ay my_vg/my_lv mount /dev/my_vg/my_lv /var/www/ mkdir /var/www/html mkdir /var/www/cgi-bin mkdir /var/www/error restorecon -R /var/www cat <<-END >/var/www/html/index.html <html> <body>Hello</body> </html> END umount /var/www", "pcs resource create my_lvm ocf:heartbeat:LVM-activate vgname=my_vg vg_access_mode=system_id --group apachegroup", "pcs resource status Resource Group: apachegroup my_lvm (ocf::heartbeat:LVM-activate): Started", "pcs resource create my_fs Filesystem device=\"/dev/my_vg/my_lv\" directory=\"/var/www\" fstype=\"xfs\" --group apachegroup pcs resource create VirtualIP IPaddr2 ip=198.51.100.3 cidr_netmask=24 --group apachegroup pcs resource create Website apache configfile=\"/etc/httpd/conf/httpd.conf\" statusurl=\"http://127.0.0.1/server-status\" --group apachegroup", "pcs status Cluster name: my_cluster Last updated: Wed Jul 31 16:38:51 2013 Last change: Wed Jul 31 16:42:14 2013 via crm_attribute on z1.example.com Stack: corosync Current DC: z2.example.com (2) - partition with quorum Version: 1.1.10-5.el7-9abe687 2 Nodes configured 6 Resources configured Online: [ z1.example.com z2.example.com ] Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Resource Group: apachegroup my_lvm (ocf::heartbeat:LVM-activate): Started z1.example.com my_fs (ocf::heartbeat:Filesystem): Started z1.example.com VirtualIP (ocf::heartbeat:IPaddr2): Started z1.example.com Website (ocf::heartbeat:apache): Started z1.example.com", "Hello", "/bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true", "/usr/bin/test -f /var/run/httpd-Website.pid >/dev/null 2>/dev/null && /usr/bin/ps -q USD(/usr/bin/cat /var/run/httpd-Website.pid) >/dev/null 2>/dev/null && /usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c \"PidFile 
/var/run/httpd-Website.pid\" -k graceful > /dev/null 2>/dev/null || true", "/usr/bin/test -f /run/httpd.pid >/dev/null 2>/dev/null && /usr/bin/ps -q USD(/usr/bin/cat /run/httpd.pid) >/dev/null 2>/dev/null && /usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c \"PidFile /run/httpd.pid\" -k graceful > /dev/null 2>/dev/null || true", "pcs node standby z1.example.com", "pcs status Cluster name: my_cluster Last updated: Wed Jul 31 17:16:17 2013 Last change: Wed Jul 31 17:18:34 2013 via crm_attribute on z1.example.com Stack: corosync Current DC: z2.example.com (2) - partition with quorum Version: 1.1.10-5.el7-9abe687 2 Nodes configured 6 Resources configured Node z1.example.com (1): standby Online: [ z2.example.com ] Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Resource Group: apachegroup my_lvm (ocf::heartbeat:LVM-activate): Started z2.example.com my_fs (ocf::heartbeat:Filesystem): Started z2.example.com VirtualIP (ocf::heartbeat:IPaddr2): Started z2.example.com Website (ocf::heartbeat:apache): Started z2.example.com", "pcs node unstandby z1.example.com", "Configuration option global/system_id_source. system_id_source = \"uname\"", "lvm systemid system ID: z1.example.com uname -n z1.example.com", "pvcreate /dev/sdb1 Physical volume \"/dev/sdb1\" successfully created", "vgcreate --setautoactivation n my_vg /dev/sdb1 Volume group \"my_vg\" successfully created", "vgcreate my_vg /dev/sdb1 Volume group \"my_vg\" successfully created", "vgs -o+systemid VG #PV #LV #SN Attr VSize VFree System ID my_vg 1 0 0 wz--n- <1.82t <1.82t z1.example.com", "lvcreate -L450 -n my_lv my_vg Rounding up size to full physical extent 452.00 MiB Logical volume \"my_lv\" created", "lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert my_lv my_vg -wi-a---- 452.00m", "mkfs.xfs /dev/my_vg/my_lv meta-data=/dev/my_vg/my_lv isize=512 agcount=4, agsize=28928 blks = sectsz=512 attr=2, projid32bit=1", "lvmdevices --adddev /dev/sdb1", "vgs --noheadings -o vg_name my_vg rhel_home rhel_root", "auto_activation_volume_list = [ \"rhel_root\", \"rhel_home\" ]", "dracut -H -f /boot/initramfs-USD(uname -r).img USD(uname -r)", "pcs cluster start", "pcs cluster start --all", "mkdir /nfsshare", "lvchange -ay my_vg/my_lv mount /dev/my_vg/my_lv /nfsshare", "mkdir -p /nfsshare/exports mkdir -p /nfsshare/exports/export1 mkdir -p /nfsshare/exports/export2", "touch /nfsshare/exports/export1/clientdatafile1 touch /nfsshare/exports/export2/clientdatafile2", "umount /dev/my_vg/my_lv vgchange -an my_vg", "pcs resource create my_lvm ocf:heartbeat:LVM-activate vgname=my_vg vg_access_mode=system_id --group nfsgroup", "root@z1 ~]# pcs status Cluster name: my_cluster Last updated: Thu Jan 8 11:13:17 2015 Last change: Thu Jan 8 11:13:08 2015 Stack: corosync Current DC: z2.example.com (2) - partition with quorum Version: 1.1.12-a14efad 2 Nodes configured 3 Resources configured Online: [ z1.example.com z2.example.com ] Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Resource Group: nfsgroup my_lvm (ocf::heartbeat:LVM-activate): Started z1.example.com PCSD Status: z1.example.com: Online z2.example.com: Online Daemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled", "pcs resource create nfsshare Filesystem device=/dev/my_vg/my_lv directory=/nfsshare fstype=xfs --group nfsgroup", "pcs status Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Resource Group: nfsgroup my_lvm (ocf::heartbeat:LVM-activate): Started z1.example.com nfsshare 
(ocf::heartbeat:Filesystem): Started z1.example.com", "pcs resource create nfs-daemon nfsserver nfs_shared_infodir=/nfsshare/nfsinfo nfs_no_notify=true --group nfsgroup pcs status", "pcs resource create nfs-root exportfs clientspec=192.168.122.0/255.255.255.0 options=rw,sync,no_root_squash directory=/nfsshare/exports fsid=0 --group nfsgroup pcs resource create nfs-export1 exportfs clientspec=192.168.122.0/255.255.255.0 options=rw,sync,no_root_squash directory=/nfsshare/exports/export1 fsid=1 --group nfsgroup pcs resource create nfs-export2 exportfs clientspec=192.168.122.0/255.255.255.0 options=rw,sync,no_root_squash directory=/nfsshare/exports/export2 fsid=2 --group nfsgroup", "pcs resource create nfs_ip IPaddr2 ip=192.168.122.200 cidr_netmask=24 --group nfsgroup", "pcs resource create nfs-notify nfsnotify source_host=192.168.122.200 --group nfsgroup", "pcs status Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Resource Group: nfsgroup my_lvm (ocf::heartbeat:LVM-activate): Started z1.example.com nfsshare (ocf::heartbeat:Filesystem): Started z1.example.com nfs-daemon (ocf::heartbeat:nfsserver): Started z1.example.com nfs-root (ocf::heartbeat:exportfs): Started z1.example.com nfs-export1 (ocf::heartbeat:exportfs): Started z1.example.com nfs-export2 (ocf::heartbeat:exportfs): Started z1.example.com nfs_ip (ocf::heartbeat:IPaddr2): Started z1.example.com nfs-notify (ocf::heartbeat:nfsnotify): Started z1.example.com", "showmount -e 192.168.122.200 Export list for 192.168.122.200: /nfsshare/exports/export1 192.168.122.0/255.255.255.0 /nfsshare/exports 192.168.122.0/255.255.255.0 /nfsshare/exports/export2 192.168.122.0/255.255.255.0", "mkdir nfsshare mount -o \"vers=4\" 192.168.122.200:export1 nfsshare ls nfsshare clientdatafile1 umount nfsshare", "mkdir nfsshare mount -o \"vers=3\" 192.168.122.200:/nfsshare/exports/export2 nfsshare ls nfsshare clientdatafile2 umount nfsshare", "mkdir nfsshare mount -o \"vers=4\" 192.168.122.200:export1 nfsshare ls nfsshare clientdatafile1", "pcs status Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Resource Group: nfsgroup my_lvm (ocf::heartbeat:LVM-activate): Started z1.example.com nfsshare (ocf::heartbeat:Filesystem): Started z1.example.com nfs-daemon (ocf::heartbeat:nfsserver): Started z1.example.com nfs-root (ocf::heartbeat:exportfs): Started z1.example.com nfs-export1 (ocf::heartbeat:exportfs): Started z1.example.com nfs-export2 (ocf::heartbeat:exportfs): Started z1.example.com nfs_ip (ocf::heartbeat:IPaddr2): Started z1.example.com nfs-notify (ocf::heartbeat:nfsnotify): Started z1.example.com", "pcs node standby z1.example.com", "pcs status Full list of resources: Resource Group: nfsgroup my_lvm (ocf::heartbeat:LVM-activate): Started z2.example.com nfsshare (ocf::heartbeat:Filesystem): Started z2.example.com nfs-daemon (ocf::heartbeat:nfsserver): Started z2.example.com nfs-root (ocf::heartbeat:exportfs): Started z2.example.com nfs-export1 (ocf::heartbeat:exportfs): Started z2.example.com nfs-export2 (ocf::heartbeat:exportfs): Started z2.example.com nfs_ip (ocf::heartbeat:IPaddr2): Started z2.example.com nfs-notify (ocf::heartbeat:nfsnotify): Started z2.example.com", "ls nfsshare clientdatafile1", "pcs node unstandby z1.example.com", "subscription-manager repos --enable=rhel-8-for-x86_64-resilientstorage-rpms", "yum install lvm2-lockd gfs2-utils dlm", "use_lvmlockd = 1", "pcs property set no-quorum-policy=freeze", "pcs resource create dlm --group locking ocf:pacemaker:controld op monitor 
interval=30s on-fail=fence", "pcs resource clone locking interleave=true", "pcs resource create lvmlockd --group locking ocf:heartbeat:lvmlockd op monitor interval=30s on-fail=fence", "pcs status --full Cluster name: my_cluster [...] Online: [ z1.example.com (1) z2.example.com (2) ] Full list of resources: smoke-apc (stonith:fence_apc): Started z1.example.com Clone Set: locking-clone [locking] Resource Group: locking:0 dlm (ocf::pacemaker:controld): Started z1.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z1.example.com Resource Group: locking:1 dlm (ocf::pacemaker:controld): Started z2.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z2.example.com Started: [ z1.example.com z2.example.com ]", "vgcreate --shared shared_vg1 /dev/vdb Physical volume \"/dev/vdb\" successfully created. Volume group \"shared_vg1\" successfully created VG shared_vg1 starting dlm lockspace Starting locking. Waiting until locks are ready", "vgcreate --shared shared_vg2 /dev/vdc Physical volume \"/dev/vdc\" successfully created. Volume group \"shared_vg2\" successfully created VG shared_vg2 starting dlm lockspace Starting locking. Waiting until locks are ready", "lvmdevices --adddev /dev/vdb lvmdevices --adddev /dev/vdc", "vgchange --lockstart shared_vg1 VG shared_vg1 starting dlm lockspace Starting locking. Waiting until locks are ready vgchange --lockstart shared_vg2 VG shared_vg2 starting dlm lockspace Starting locking. Waiting until locks are ready", "lvcreate --activate sy -L5G -n shared_lv1 shared_vg1 Logical volume \"shared_lv1\" created. lvcreate --activate sy -L5G -n shared_lv2 shared_vg1 Logical volume \"shared_lv2\" created. lvcreate --activate sy -L5G -n shared_lv1 shared_vg2 Logical volume \"shared_lv1\" created. mkfs.gfs2 -j2 -p lock_dlm -t my_cluster:gfs2-demo1 /dev/shared_vg1/shared_lv1 mkfs.gfs2 -j2 -p lock_dlm -t my_cluster:gfs2-demo2 /dev/shared_vg1/shared_lv2 mkfs.gfs2 -j2 -p lock_dlm -t my_cluster:gfs2-demo3 /dev/shared_vg2/shared_lv1", "pcs resource create sharedlv1 --group shared_vg1 ocf:heartbeat:LVM-activate lvname=shared_lv1 vgname=shared_vg1 activation_mode=shared vg_access_mode=lvmlockd", "pcs resource create sharedlv2 --group shared_vg1 ocf:heartbeat:LVM-activate lvname=shared_lv2 vgname=shared_vg1 activation_mode=shared vg_access_mode=lvmlockd", "pcs resource create sharedlv3 --group shared_vg2 ocf:heartbeat:LVM-activate lvname=shared_lv1 vgname=shared_vg2 activation_mode=shared vg_access_mode=lvmlockd", "pcs resource clone shared_vg1 interleave=true pcs resource clone shared_vg2 interleave=true", "pcs constraint order start locking-clone then shared_vg1-clone Adding locking-clone shared_vg1-clone (kind: Mandatory) (Options: first-action=start then-action=start) pcs constraint order start locking-clone then shared_vg2-clone Adding locking-clone shared_vg2-clone (kind: Mandatory) (Options: first-action=start then-action=start)", "pcs constraint colocation add shared_vg1-clone with locking-clone pcs constraint colocation add shared_vg2-clone with locking-clone", "lvs LV VG Attr LSize shared_lv1 shared_vg1 -wi-a----- 5.00g shared_lv2 shared_vg1 -wi-a----- 5.00g shared_lv1 shared_vg2 -wi-a----- 5.00g lvs LV VG Attr LSize shared_lv1 shared_vg1 -wi-a----- 5.00g shared_lv2 shared_vg1 -wi-a----- 5.00g shared_lv1 shared_vg2 -wi-a----- 5.00g", "pcs resource create sharedfs1 --group shared_vg1 ocf:heartbeat:Filesystem device=\"/dev/shared_vg1/shared_lv1\" directory=\"/mnt/gfs1\" fstype=\"gfs2\" options=noatime op monitor interval=10s on-fail=fence pcs resource create sharedfs2 
--group shared_vg1 ocf:heartbeat:Filesystem device=\"/dev/shared_vg1/shared_lv2\" directory=\"/mnt/gfs2\" fstype=\"gfs2\" options=noatime op monitor interval=10s on-fail=fence pcs resource create sharedfs3 --group shared_vg2 ocf:heartbeat:Filesystem device=\"/dev/shared_vg2/shared_lv1\" directory=\"/mnt/gfs3\" fstype=\"gfs2\" options=noatime op monitor interval=10s on-fail=fence", "mount | grep gfs2 /dev/mapper/shared_vg1-shared_lv1 on /mnt/gfs1 type gfs2 (rw,noatime,seclabel) /dev/mapper/shared_vg1-shared_lv2 on /mnt/gfs2 type gfs2 (rw,noatime,seclabel) /dev/mapper/shared_vg2-shared_lv1 on /mnt/gfs3 type gfs2 (rw,noatime,seclabel) mount | grep gfs2 /dev/mapper/shared_vg1-shared_lv1 on /mnt/gfs1 type gfs2 (rw,noatime,seclabel) /dev/mapper/shared_vg1-shared_lv2 on /mnt/gfs2 type gfs2 (rw,noatime,seclabel) /dev/mapper/shared_vg2-shared_lv1 on /mnt/gfs3 type gfs2 (rw,noatime,seclabel)", "pcs status --full Cluster name: my_cluster [...] Full list of resources: smoke-apc (stonith:fence_apc): Started z1.example.com Clone Set: locking-clone [locking] Resource Group: locking:0 dlm (ocf::pacemaker:controld): Started z2.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z2.example.com Resource Group: locking:1 dlm (ocf::pacemaker:controld): Started z1.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z1.example.com Started: [ z1.example.com z2.example.com ] Clone Set: shared_vg1-clone [shared_vg1] Resource Group: shared_vg1:0 sharedlv1 (ocf::heartbeat:LVM-activate): Started z2.example.com sharedlv2 (ocf::heartbeat:LVM-activate): Started z2.example.com sharedfs1 (ocf::heartbeat:Filesystem): Started z2.example.com sharedfs2 (ocf::heartbeat:Filesystem): Started z2.example.com Resource Group: shared_vg1:1 sharedlv1 (ocf::heartbeat:LVM-activate): Started z1.example.com sharedlv2 (ocf::heartbeat:LVM-activate): Started z1.example.com sharedfs1 (ocf::heartbeat:Filesystem): Started z1.example.com sharedfs2 (ocf::heartbeat:Filesystem): Started z1.example.com Started: [ z1.example.com z2.example.com ] Clone Set: shared_vg2-clone [shared_vg2] Resource Group: shared_vg2:0 sharedlv3 (ocf::heartbeat:LVM-activate): Started z2.example.com sharedfs3 (ocf::heartbeat:Filesystem): Started z2.example.com Resource Group: shared_vg2:1 sharedlv3 (ocf::heartbeat:LVM-activate): Started z1.example.com sharedfs3 (ocf::heartbeat:Filesystem): Started z1.example.com Started: [ z1.example.com z2.example.com ]", "subscription-manager repos --enable=rhel-8-for-x86_64-resilientstorage-rpms", "yum install lvm2-lockd gfs2-utils dlm", "use_lvmlockd = 1", "pcs property set no-quorum-policy=freeze", "pcs resource create dlm --group locking ocf:pacemaker:controld op monitor interval=30s on-fail=fence", "pcs resource clone locking interleave=true", "pcs resource create lvmlockd --group locking ocf:heartbeat:lvmlockd op monitor interval=30s on-fail=fence", "pcs status --full Cluster name: my_cluster [...] Online: [ z1.example.com (1) z2.example.com (2) ] Full list of resources: smoke-apc (stonith:fence_apc): Started z1.example.com Clone Set: locking-clone [locking] Resource Group: locking:0 dlm (ocf::pacemaker:controld): Started z1.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z1.example.com Resource Group: locking:1 dlm (ocf::pacemaker:controld): Started z2.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z2.example.com Started: [ z1.example.com z2.example.com ]", "vgcreate --shared shared_vg1 /dev/sda1 Physical volume \"/dev/sda1\" successfully created. 
Volume group \"shared_vg1\" successfully created VG shared_vg1 starting dlm lockspace Starting locking. Waiting until locks are ready", "lvmdevices --adddev /dev/sda1", "vgchange --lockstart shared_vg1 VG shared_vg1 starting dlm lockspace Starting locking. Waiting until locks are ready", "lvcreate --activate sy -L5G -n shared_lv1 shared_vg1 Logical volume \"shared_lv1\" created.", "pcs resource create sharedlv1 --group shared_vg1 ocf:heartbeat:LVM-activate lvname=shared_lv1 vgname=shared_vg1 activation_mode=shared vg_access_mode=lvmlockd", "pcs resource clone shared_vg1 interleave=true", "pcs constraint order start locking-clone then shared_vg1-clone Adding locking-clone shared_vg1-clone (kind: Mandatory) (Options: first-action=start then-action=start)", "pcs constraint colocation add shared_vg1-clone with locking-clone", "lvs LV VG Attr LSize shared_lv1 shared_vg1 -wi-a----- 5.00g lvs LV VG Attr LSize shared_lv1 shared_vg1 -wi-a----- 5.00g", "touch /etc/crypt_keyfile chmod 600 /etc/crypt_keyfile", "dd if=/dev/urandom bs=4K count=1 of=/etc/crypt_keyfile 1+0 records in 1+0 records out 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306202 s, 13.4 MB/s scp /etc/crypt_keyfile [email protected]:/etc/", "scp -p /etc/crypt_keyfile [email protected]:/etc/", "cryptsetup luksFormat /dev/shared_vg1/shared_lv1 --type luks2 --key-file=/etc/crypt_keyfile WARNING! ======== This will overwrite data on /dev/shared_vg1/shared_lv1 irrevocably. Are you sure? (Type 'yes' in capital letters): YES", "pcs resource create crypt --group shared_vg1 ocf:heartbeat:crypt crypt_dev=\"luks_lv1\" crypt_type=luks2 key_file=/etc/crypt_keyfile encrypted_dev=\"/dev/shared_vg1/shared_lv1\"", "ls -l /dev/mapper/ lrwxrwxrwx 1 root root 7 Mar 4 09:52 luks_lv1 -> ../dm-3", "mkfs.gfs2 -j3 -p lock_dlm -t my_cluster:gfs2-demo1 /dev/mapper/luks_lv1 /dev/mapper/luks_lv1 is a symbolic link to /dev/dm-3 This will destroy any data on /dev/dm-3 Are you sure you want to proceed? [y/n] y Discarding device contents (may take a while on large devices): Done Adding journals: Done Building resource groups: Done Creating quota file: Done Writing superblock and syncing: Done Device: /dev/mapper/luks_lv1 Block size: 4096 Device size: 4.98 GB (1306624 blocks) Filesystem size: 4.98 GB (1306622 blocks) Journals: 3 Journal size: 16MB Resource groups: 23 Locking protocol: \"lock_dlm\" Lock table: \"my_cluster:gfs2-demo1\" UUID: de263f7b-0f12-4d02-bbb2-56642fade293", "pcs resource create sharedfs1 --group shared_vg1 ocf:heartbeat:Filesystem device=\"/dev/mapper/luks_lv1\" directory=\"/mnt/gfs1\" fstype=\"gfs2\" options=noatime op monitor interval=10s on-fail=fence", "mount | grep gfs2 /dev/mapper/luks_lv1 on /mnt/gfs1 type gfs2 (rw,noatime,seclabel) mount | grep gfs2 /dev/mapper/luks_lv1 on /mnt/gfs1 type gfs2 (rw,noatime,seclabel)", "pcs status --full Cluster name: my_cluster [...] 
Full list of resources: smoke-apc (stonith:fence_apc): Started z1.example.com Clone Set: locking-clone [locking] Resource Group: locking:0 dlm (ocf::pacemaker:controld): Started z2.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z2.example.com Resource Group: locking:1 dlm (ocf::pacemaker:controld): Started z1.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z1.example.com Started: [ z1.example.com z2.example.com ] Clone Set: shared_vg1-clone [shared_vg1] Resource Group: shared_vg1:0 sharedlv1 (ocf::heartbeat:LVM-activate): Started z2.example.com crypt (ocf::heartbeat:crypt) Started z2.example.com sharedfs1 (ocf::heartbeat:Filesystem): Started z2.example.com Resource Group: shared_vg1:1 sharedlv1 (ocf::heartbeat:LVM-activate): Started z1.example.com crypt (ocf::heartbeat:crypt) Started z1.example.com sharedfs1 (ocf::heartbeat:Filesystem): Started z1.example.com Started: [z1.example.com z2.example.com ]", "vgchange --lock-type none --lock-opt force upgrade_gfs_vg Forcibly change VG lock type to none? [y/n]: y Volume group \"upgrade_gfs_vg\" successfully changed", "vgchange --lock-type dlm upgrade_gfs_vg Volume group \"upgrade_gfs_vg\" successfully changed", "vgchange --lockstart upgrade_gfs_vg VG upgrade_gfs_vg starting dlm lockspace Starting locking. Waiting until locks are ready vgchange --lockstart upgrade_gfs_vg VG upgrade_gfs_vg starting dlm lockspace Starting locking. Waiting until locks are ready", "subscription-manager repos --enable=rhel-8-for-x86_64-resilientstorage-rpms", "yum install lvm2-lockd gfs2-utils dlm", "use_lvmlockd = 1", "pcs property set no-quorum-policy=freeze", "pcs resource create dlm --group locking ocf:pacemaker:controld op monitor interval=30s on-fail=fence", "pcs resource clone locking interleave=true", "pcs resource create lvmlockd --group locking ocf:heartbeat:lvmlockd op monitor interval=30s on-fail=fence", "pvcreate /dev/vdb vgcreate -Ay --shared csmb_vg /dev/vdb Volume group \"csmb_vg\" successfully created VG csmb_vg starting dlm lockspace Starting locking. Waiting until locks are ready", "lvmdevices --adddev /dev/vdb", "vgchange --lockstart csmb_vg VG csmb_vg starting dlm lockspace Starting locking. 
Waiting until locks are ready", "lvcreate -L1G -n ctdb_lv csmb_vg mkfs.gfs2 -j3 -p lock_dlm -t my_cluster:ctdb /dev/csmb_vg/ctdb_lv", "lvcreate -L50G -n csmb_lv1 csmb_vg mkfs.gfs2 -j3 -p lock_dlm -t my_cluster:csmb1 /dev/csmb_vg/csmb_lv1", "pcs resource create --disabled --group shared_vg ctdb_lv ocf:heartbeat:LVM-activate lvname=ctdb_lv vgname=csmb_vg activation_mode=shared vg_access_mode=lvmlockd pcs resource create --disabled --group shared_vg csmb_lv1 ocf:heartbeat:LVM-activate lvname=csmb_lv1 vgname=csmb_vg activation_mode=shared vg_access_mode=lvmlockd pcs resource clone shared_vg interleave=true", "pcs constraint order start locking-clone then shared_vg-clone Adding locking-clone shared_vg-clone (kind: Mandatory) (Options: first-action=start then-action=start)", "pcs resource enable ctdb_lv csmb_lv1", "pcs resource create ctdb_fs Filesystem device=\"/dev/csmb_vg/ctdb_lv\" directory=\"/mnt/ctdb\" fstype=\"gfs2\" op monitor interval=10s on-fail=fence clone interleave=true pcs resource create csmb_fs1 Filesystem device=\"/dev/csmb_vg/csmb_lv1\" directory=\"/srv/samba/share1\" fstype=\"gfs2\" op monitor interval=10s on-fail=fence clone interleave=true", "pcs constraint order start shared_vg-clone then ctdb_fs-clone Adding shared_vg-clone ctdb_fs-clone (kind: Mandatory) (Options: first-action=start then-action=start) pcs constraint order start shared_vg-clone then csmb_fs1-clone Adding shared_vg-clone csmb_fs1-clone (kind: Mandatory) (Options: first-action=start then-action=start)", "dnf -y install samba ctdb cifs-utils samba-winbind", "systemctl disable --now ctdb smb nmb winbind", "[global] netbios name = linuxserver workgroup = WORKGROUP security = user clustering = yes [share1] path = /srv/samba/share1 read only = no", "testparm", "192.0.2.11 192.0.2.12", "192.0.2.201/24 enp1s0 192.0.2.202/24 enp1s0", "firewall-cmd --add-service=ctdb --add-service=samba --permanent firewall-cmd --reload", "semanage fcontext -at ctdbd_var_run_t -s system_u \"/mnt/ctdb(/. )?\" restorecon -Rv /mnt/ctdb", "semanage fcontext -at samba_share_t -s system_u \"/srv/samba/share1(/. 
)?\" restorecon -Rv /srv/samba/share1", "pcs resource create --disabled ctdb --group samba-group ocf:heartbeat:CTDB ctdb_recovery_lock=/mnt/ctdb/ctdb.lock ctdb_dbdir=/var/lib/ctdb ctdb_logfile=/var/log/ctdb.log op monitor interval=10 timeout=30 op start timeout=90 op stop timeout=100", "pcs resource clone samba-group", "pcs constraint order start ctdb_fs-clone then samba-group-clone pcs constraint order start csmb_fs1-clone then samba-group-clone", "pcs resource create samba --group samba-group systemd:smb", "pcs resource enable ctdb samba", "pcs status Full List of Resources: * fence-z1 (stonith:fence_xvm): Started z1.example.com * fence-z2 (stonith:fence_xvm): Started z2.example.com * Clone Set: locking-clone [locking]: * Started: [ z1.example.com z2.example.com ] * Clone Set: shared_vg-clone [shared_vg]: * Started: [ z1.example.com z2.example.com ] * Clone Set: ctdb_fs-clone [ctdb_fs]: * Started: [ z1.example.com z2.example.com ] * Clone Set: csmb_fs1-clone [csmb_fs1]: * Started: [ z1.example.com z2.example.com ] * Clone Set: samba-group-clone [samba-group]: * Started: [ z1.example.com z2.example.com ]", "useradd -M -s /sbin/nologin example_user", "passwd example_user", "smbpasswd -a example_user New SMB password: Retype new SMB password: Added user example_user", "smbpasswd -e example_user", "chown example_user:users /srv/samba/share1/ chmod 755 /srv/samba/share1/", "mkdir /mnt/sambashare mount -t cifs -o user=example_user //192.0.2.201/share1 /mnt/sambashare Password for example_user@//192.0.2.201/public: XXXXXXX", "mount | grep /mnt/sambashare //192.0.2.201/public on /mnt/sambashare type cifs (rw,relatime,vers=1.0,cache=strict,username=example_user,domain=LINUXSERVER,uid=0,noforceuid,gid=0,noforcegid,addr=192.0.2.201,unix,posixpaths,serverino,mapposix,acl,rsize=1048576,wsize=65536,echo_interval=60,actimeo=1,user=example_user)", "touch /mnt/sambashare/testfile1 ls /mnt/sambashare testfile1", "ip -4 addr show enp1s0 | grep inet inet 192.0.2.11/24 brd 192.0.2.255 scope global dynamic noprefixroute enp1s0 inet 192.0.2.201/24 brd 192.0.2.255 scope global secondary enp1s0 ip -4 addr show enp1s0 | grep inet inet 192.0.2.12/24 brd 192.0.2.255 scope global dynamic noprefixroute enp1s0 inet 192.0.2.202/24 brd 192.0.2.255 scope global secondary enp1s0", "pcs node standby z1.example.com", "touch /mnt/sambashare/testfile2 ls /mnt/sambashare testfile1 testfile2", "rm /mnt/sambashare/testfile1 /mnt/sambashare/testfile2 rm: remove regular empty file '/mnt/sambashare/testfile1'? y rm: remove regular empty file '/mnt/sambashare/testfile1'? y umount /mnt/sambashare", "pcs node unstandby z1.example.com", "subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms", "yum install pcs pacemaker fence-agents-all", "yum install pcs pacemaker fence-agents- model", "rpm -q -a | grep fence fence-agents-rhevm-4.0.2-3.el7.x86_64 fence-agents-ilo-mp-4.0.2-3.el7.x86_64 fence-agents-ipmilan-4.0.2-3.el7.x86_64", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --add-service=high-availability", "passwd hacluster Changing password for user hacluster. 
New password: Retype new password: passwd: all authentication tokens updated successfully.", "systemctl start pcsd.service systemctl enable pcsd.service", "https:// nodename :2224", "pcs stonith list [ filter ]", "pcs stonith describe [ stonith_agent ]", "pcs stonith describe fence_apc Stonith options for: fence_apc ipaddr (required): IP Address or Hostname login (required): Login Name passwd: Login password or passphrase passwd_script: Script to retrieve password cmd_prompt: Force command prompt secure: SSH connection port (required): Physical plug number or name of virtual machine identity_file: Identity file for ssh switch: Physical switch number on device inet4_only: Forces agent to use IPv4 addresses only inet6_only: Forces agent to use IPv6 addresses only ipport: TCP port to use for connection with device action (required): Fencing Action verbose: Verbose mode debug: Write debug information to given file version: Display version information and exit help: Display help and exit separator: Separator for CSV created by operation list power_timeout: Test X seconds for status change after ON/OFF shell_timeout: Wait X seconds for cmd prompt after issuing command login_timeout: Wait X seconds for cmd prompt after login power_wait: Wait X seconds after issuing ON/OFF delay: Wait X seconds before fencing is started retry_on: Count of attempts to retry power on", "pcs stonith create stonith_id stonith_device_type [ stonith_device_options ] [op operation_action operation_options ]", "pcs stonith create MyStonith fence_virt pcmk_host_list=f1 op monitor interval=30s", "fence_ipmilan -a ipaddress -l username -p password -o status", "fence_ipmilan -a ipaddress -l username -p password -o reboot", "fence_ipmilan -a ipaddress -l username -p password -o status -D /tmp/USD(hostname)-fence_agent.debug", "pcs stonith fence node_name", "firewall-cmd --direct --add-rule ipv4 filter OUTPUT 2 -p udp --dport=5405 -j DROP firewall-cmd --add-rich-rule='rule family=\"ipv4\" port port=\"5405\" protocol=\"udp\" drop", "echo c > /proc/sysrq-trigger", "pcs stonith level add level node devices", "pcs stonith level", "pcs stonith level add 1 rh7-2 my_ilo pcs stonith level add 2 rh7-2 my_apc pcs stonith level Node: rh7-2 Level 1 - my_ilo Level 2 - my_apc", "pcs stonith level remove level [ node_id ] [ stonith_id ] ... 
[ stonith_id ]", "pcs stonith level clear [ node ]| stonith_id (s)]", "pcs stonith level clear dev_a,dev_b", "pcs stonith level verify", "pcs stonith level add 1 \"regexp%node[1-3]\" apc1,apc2 pcs stonith level add 1 \"regexp%node[4-6]\" apc3,apc4", "pcs node attribute node1 rack=1 pcs node attribute node2 rack=1 pcs node attribute node3 rack=1 pcs node attribute node4 rack=2 pcs node attribute node5 rack=2 pcs node attribute node6 rack=2 pcs stonith level add 1 attrib%rack=1 apc1,apc2 pcs stonith level add 1 attrib%rack=2 apc3,apc4", "pcs stonith create apc1 fence_apc_snmp ipaddr=apc1.example.com login=user passwd='7a4D#1j!pz864' pcmk_host_map=\"node1.example.com:1;node2.example.com:2\" pcs stonith create apc2 fence_apc_snmp ipaddr=apc2.example.com login=user passwd='7a4D#1j!pz864' pcmk_host_map=\"node1.example.com:1;node2.example.com:2\" pcs stonith level add 1 node1.example.com apc1,apc2 pcs stonith level add 1 node2.example.com apc1,apc2", "pcs stonith config [ stonith_id ] [--full]", "pcs stonith create myapc fence_apc_snmp ip=\"zapc.example.com\" pcmk_host_map=\"z1.example.com:1;z2.example.com:2\" username=\"apc\" password=\"apc\" pcs stonith config --output-format=cmd Warning: Only 'text' output format is supported for stonith levels pcs stonith create --no-default-ops --force -- myapc fence_apc_snmp ip=zapc.example.com password=apc 'pcmk_host_map=z1.example.com:1;z2.example.com:2' username=apc op monitor interval=60s id=myapc-monitor-interval-60s", "pcs stonith update stonith_id [ stonith_device_options ]", "pcs stonith update-scsi-devices stonith_id set device-path1 device-path2 pcs stonith update-scsi-devices stonith_id add device-path1 remove device-path2", "pcs stonith delete stonith_id", "pcs stonith fence node [--off]", "pcs stonith confirm node", "pcs stonith disable myapc", "pcs constraint location node1-ipmi avoids node1", "`Soft-Off by PWR-BTTN` set to `Instant-Off`", "+---------------------------------------------|-------------------+ | ACPI Function [Enabled] | Item Help | | ACPI Suspend Type [S1(POS)] |-------------------| | x Run VGABIOS if S3 Resume Auto | Menu Level * | | Suspend Mode [Disabled] | | | HDD Power Down [Disabled] | | | Soft-Off by PWR-BTTN [Instant-Off | | | CPU THRM-Throttling [50.0%] | | | Wake-Up by PCI card [Enabled] | | | Power On by Ring [Enabled] | | | Wake Up On LAN [Enabled] | | | x USB KB Wake-Up From S3 Disabled | | | Resume by Alarm [Disabled] | | | x Date(of Month) Alarm 0 | | | x Time(hh:mm:ss) Alarm 0 : 0 : | | | POWER ON Function [BUTTON ONLY | | | x KB Power ON Password Enter | | | x Hot Key Power ON Ctrl-F1 | | | | | | | | +---------------------------------------------|-------------------+", "HandlePowerKey=ignore", "systemctl restart systemd-logind.service", "grubby --args=acpi=off --update-kernel=ALL", "pcs resource create resource_id [ standard :[ provider :]] type [ resource_options ] [op operation_action operation_options [ operation_action operation options ]...] [meta meta_options ...] [clone [ clone_options ] | master [ master_options ] [--wait[= n ]]", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s", "pcs resource create VirtualIP IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s", "pcs resource delete resource_id", "pcs resource delete VirtualIP", "pcs resource describe [ standard :[ provider :]] type", "pcs resource describe ocf:heartbeat:apache This is the resource agent for the Apache Web server. 
This resource agent operates both version 1.x and version 2.x Apache servers.", "pcs resource defaults update resource-stickiness=100", "pcs resource defaults set create id=pgsql-stickiness meta resource-stickiness=100 rule resource ::pgsql", "pcs resource defaults Meta Attrs: rsc_defaults-meta_attributes resource-stickiness=100", "pcs resource defaults Meta Attrs: pgsql-stickiness resource-stickiness=100 Rule: boolean-op=and score=INFINITY Expression: resource ::pgsql", "pcs resource create resource_id [ standard :[ provider :]] type [ resource options ] [meta meta_options ...]", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 meta resource-stickiness=50", "pcs resource meta resource_id | group_id | clone_id meta_options", "pcs resource meta dummy_resource failure-timeout=20s", "pcs resource config dummy_resource Resource: dummy_resource (class=ocf provider=heartbeat type=Dummy) Meta Attrs: failure-timeout=20s", "pcs resource group add group_name resource_id [ resource_id ] ... [ resource_id ] [--before resource_id | --after resource_id ]", "pcs resource create resource_id [ standard :[ provider :]] type [resource_options] [op operation_action operation_options ] --group group_name", "pcs resource group add shortcut IPaddr Email", "pcs resource group remove group_name resource_id", "pcs resource group list", "pcs constraint location rsc prefers node [= score ] [ node [= score ]]", "pcs constraint location rsc avoids node [= score ] [ node [= score ]]", "pcs constraint location Webserver prefers node1", "pcs constraint location 'regexp%dummy[0-9]' prefers node1", "pcs constraint location 'regexp%dummy[[:digit:]]' prefers node1", "pcs constraint location add id rsc node score [resource-discovery= option ]", "pcs property set symmetric-cluster=false", "pcs constraint location Webserver prefers example-1=200 pcs constraint location Webserver prefers example-3=0 pcs constraint location Database prefers example-2=200 pcs constraint location Database prefers example-3=0", "pcs property set symmetric-cluster=true", "pcs constraint location Webserver prefers example-1=200 pcs constraint location Webserver avoids example-2=INFINITY pcs constraint location Database avoids example-1=INFINITY pcs constraint location Database prefers example-2=200", "pcs resource defaults update resource-stickiness=1", "pcs constraint order [ action ] resource_id then [ action ] resource_id [ options ]", "pcs constraint order remove resource1 [ resourceN ]", "pcs constraint order VirtualIP then dummy_resource kind=Optional", "pcs constraint order set resource1 resource2 [ resourceN ]... [ options ] [set resourceX resourceY ... [ options ]] [setoptions [ constraint_options ]]", "pcs constraint order set D1 D2 D3", "pcs constraint order set A B sequential=false require-all=false set C D set E F sequential=false setoptions symmetrical=false", "[Unit] Requires=foo.service After=foo.service", "[Unit] Requires=srv.mount After=srv.mount", "[Unit] Requires=blk-availability.service After=blk-availability.service", "pcs constraint colocation add [master|slave] source_resource with [master|slave] target_resource [ score ] [ options ]", "pcs constraint colocation add myresource1 with myresource2 score=INFINITY", "pcs constraint colocation add myresource1 with myresource2 score=-INFINITY", "pcs constraint colocation set resource1 resource2 ] [ resourceN ]... [ options ] [set resourceX resourceY ] ... 
[ options ]] [setoptions [ constraint_options ]]", "pcs constraint colocation remove source_resource target_resource", "pcs constraint [list|show] [--full]", "pcs constraint location [show [resources [ resource ...]] | [nodes [ node ...]]] [--full]", "pcs constraint order [show]", "pcs constraint colocation [show]", "pcs constraint ref resource", "pcs resource relations resource [--full]", "pcs constraint order start C then start D Adding C D (kind: Mandatory) (Options: first-action=start then-action=start) pcs constraint order start D then start E Adding D E (kind: Mandatory) (Options: first-action=start then-action=start) pcs resource relations C C `- order | start C then start D `- D `- order | start D then start E `- E pcs resource relations D D |- order | | start C then start D | `- C `- order | start D then start E `- E pcs resource relations E E `- order | start D then start E `- D `- order | start C then start D `- C", "pcs resource relations A A `- outer resource `- G `- inner resource(s) | members: A B `- B pcs resource relations B B `- outer resource `- G `- inner resource(s) | members: A B `- A pcs resource relations G G `- inner resource(s) | members: A B |- A `- B", "pcs node attribute node1 rack=1 pcs node attribute node2 rack=2", "pcs constraint location rsc rule [resource-discovery= option ] [role=master|slave] [score= score | score-attribute= attribute ] expression", "pcs constraint location Webserver rule score=INFINITY date-spec years=2018", "pcs constraint location Webserver rule score=INFINITY date-spec hours=\"9-16\" weekdays=\"1-5\"", "pcs constraint location Webserver rule date-spec weekdays=5 monthdays=13 moon=4", "pcs constraint rule remove rule_id", "pcs resource status", "pcs resource status VirtualIP (ocf::heartbeat:IPaddr2): Started WebSite (ocf::heartbeat:apache): Started", "pcs resource config resource_id", "pcs resource config VirtualIP Resource: VirtualIP (type=IPaddr2 class=ocf provider=heartbeat) Attributes: ip=192.168.0.120 cidr_netmask=24 Operations: monitor interval=30s", "pcs resource status resource_id", "pcs resource status VirtualIP VirtualIP (ocf::heartbeat:IPaddr2): Started", "pcs resource status node= node_id", "pcs resource status node=node-01 VirtualIP (ocf::heartbeat:IPaddr2): Started WebSite (ocf::heartbeat:apache): Started", "pcs resource create my_lvm ocf:heartbeat:LVM-activate vgname=my_vg vg_access_mode=system_id --group apachegroup pcs resource create my_fs Filesystem device=\"/dev/my_vg/my_lv\" directory=\"/var/www\" fstype=\"xfs\" --group apachegroup pcs resource create VirtualIP IPaddr2 ip=198.51.100.3 cidr_netmask=24 --group apachegroup pcs resource create Website apache configfile=\"/etc/httpd/conf/httpd.conf\" statusurl=\"http://127.0.0.1/server-status\" --group apachegroup", "pcs resource config --output-format=cmd pcs resource create --no-default-ops --force -- my_lvm ocf:heartbeat:LVM-activate vg_access_mode=system_id vgname=my_vg op monitor interval=30s id=my_lvm-monitor-interval-30s timeout=90s start interval=0s id=my_lvm-start-interval-0s timeout=90s stop interval=0s id=my_lvm-stop-interval-0s timeout=90s; pcs resource create --no-default-ops --force -- my_fs ocf:heartbeat:Filesystem device=/dev/my_vg/my_lv directory=/var/www fstype=xfs op monitor interval=20s id=my_fs-monitor-interval-20s timeout=40s start interval=0s id=my_fs-start-interval-0s timeout=60s stop interval=0s id=my_fs-stop-interval-0s timeout=60s; pcs resource create --no-default-ops --force -- VirtualIP ocf:heartbeat:IPaddr2 cidr_netmask=24 ip=198.51.100.3 
op monitor interval=10s id=VirtualIP-monitor-interval-10s timeout=20s start interval=0s id=VirtualIP-start-interval-0s timeout=20s stop interval=0s id=VirtualIP-stop-interval-0s timeout=20s; pcs resource create --no-default-ops --force -- Website ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl=http://127.0.0.1/server-status op monitor interval=10s id=Website-monitor-interval-10s timeout=20s start interval=0s id=Website-start-interval-0s timeout=40s stop interval=0s id=Website-stop-interval-0s timeout=60s; pcs resource group add apachegroup my_lvm my_fs VirtualIP Website", "pcs resource config VirtualIP --output-format=cmd pcs resource create --no-default-ops --force -- VirtualIP ocf:heartbeat:IPaddr2 cidr_netmask=24 ip=198.51.100.3 op monitor interval=10s id=VirtualIP-monitor-interval-10s timeout=20s start interval=0s id=VirtualIP-start-interval-0s timeout=20s stop interval=0s id=VirtualIP-stop-interval-0s timeout=20s", "pcs resource update resource_id [ resource_options ]", "pcs resource config VirtualIP Resource: VirtualIP (type=IPaddr2 class=ocf provider=heartbeat) Attributes: ip=192.168.0.120 cidr_netmask=24 Operations: monitor interval=30s pcs resource update VirtualIP ip=192.169.0.120 pcs resource config VirtualIP Resource: VirtualIP (type=IPaddr2 class=ocf provider=heartbeat) Attributes: ip=192.169.0.120 cidr_netmask=24 Operations: monitor interval=30s", "pcs resource cleanup resource_id", "pcs resource meta dummy_resource migration-threshold=10", "pcs resource defaults update migration-threshold=10", "pcs resource create ping ocf:pacemaker:ping dampen=5s multiplier=1000 host_list=gateway.example.com clone", "pcs constraint location Webserver rule score=-INFINITY pingd lt 1 or not_defined pingd", "pcs resource update resourceXZY op monitor enabled=false pcs resource update resourceXZY op monitor enabled=true", "pcs resource update resourceXZY op monitor timeout=600 enabled=true", "pcs tag create special-resources d-01 d-02", "pcs tag config special-resources d-01 d-02", "pcs resource disable special-resources", "pcs resource * d-01 (ocf::pacemaker:Dummy): Stopped (disabled) * d-02 (ocf::pacemaker:Dummy): Stopped (disabled)", "pcs tag remove special-resources pcs tag No tags defined", "pcs tag update special-resources remove d-01", "pcs resource delete d-01 Attempting to stop: d-01... 
Stopped", "pcs resource create resource_id [ standard :[ provider :]] type [ resource options ] [meta resource meta options ] clone [ clone_id ] [ clone options ]", "pcs resource create resource_id [ standard :[ provider :]] type [ resource options ] [meta resource meta options ] clone [ clone options ]", "pcs resource clone resource_id | group_id [ clone_id ][ clone options ]", "pcs resource clone resource_id | group_id [ clone options ]", "pcs resource create webfarm apache clone", "pcs resource unclone resource_id | clone_id | group_name", "pcs constraint location webfarm-clone prefers node1", "pcs constraint order start webfarm-clone then webfarm-stats", "pcs constraint colocation add webfarm-stats with webfarm-clone", "pcs resource create resource_id [ standard :[ provider :]] type [ resource options ] promotable [ clone_id ] [ clone options ]", "pcs resource create resource_id [ standard :[ provider :]] type [ resource options ] promotable [ clone options ]", "pcs resource promotable resource_id [ clone_id ] [ clone options ]", "pcs resource promotable resource_id [ clone options ]", "pcs constraint colocation add [master|slave] source_resource with [master|slave] target_resource [ score ] [ options ]", "pcs constraint order [ action ] resource_id then [ action ] resource_id [ options ]", "pcs resource op add my-rsc promote on-fail=\"demote\"", "pcs resource op add my-rsc monitor interval=\"10s\" on-fail=\"demote\" role=\"Master\"", "pcs cluster stop [--all | node ] [...]", "pcs cluster kill", "pcs cluster enable [--all | node ] [...]", "pcs cluster disable [--all | node ] [...]", "yum install -y pcs fence-agents-all", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --add-service=high-availability", "passwd hacluster Changing password for user hacluster. 
New password: Retype new password: passwd: all authentication tokens updated successfully.", "systemctl start pcsd.service systemctl enable pcsd.service", "pcs host auth newnode.example.com Username: hacluster Password: newnode.example.com: Authorized", "pcs cluster node add newnode.example.com", "pcs cluster start Starting Cluster pcs cluster enable", "pcs cluster node remove node", "pcs cluster node add rh80-node3 addr=192.168.122.203 addr=192.168.123.203", "pcs cluster link add node1=10.0.5.11 node2=10.0.5.12 node3=10.0.5.31 options linknumber=5", "pcs cluster link delete 5 pcs cluster link remove 5", "pcs cluster link remove 2", "pcs cluster link add node1=10.0.5.11 node2=10.0.5.12 node3=10.0.5.31 options linknumber=2", "pcs cluster link add node1=10.0.5.11 node2=10.0.5.12 node3=10.0.5.31 options linknumber=2", "pcs cluster link remove 1", "pcs cluster link add node1=10.0.5.13 node2=10.0.5.14 options linknumber=2", "pcs cluster link remove 1", "pcs cluster link add node1=10.0.5.11 node2=10.0.5.31 options linknumber=1", "pcs cluster link remove 2", "pcs cluster link add node1=10.0.5.13 node2=10.0.5.14 options linknumber=2", "pcs cluster link remove 1", "pcs cluster link add node1=10.0.5.11 node2=10.0.5.12 options linknumber=1 link_priority=11", "pcs cluster link remove 2", "pcs cluster stop --all", "pcs cluster link update 1 node1=10.0.5.11 node3=10.0.5.31 options link_priority=11", "pcs cluster start --all", "pcs property set node-health-strategy=migrate-on-red", "pcs resource create io-monitor ocf:pacemaker:HealthIOWait red_limit=15 op monitor interval=10s meta allow-unhealthy-nodes=true clone", "pcs property set cluster-ipc-limit=2000", "Compressed message exceeds X % of configured IPC limit ( X bytes); consider setting PCMK_ipc_buffer to X or higher", "PCMK_ipc_buffer=13396332", "systemctl restart pacemaker", "adduser rouser usermod -a -G haclient rouser", "pcs acl enable", "pcs acl role create read-only description=\"Read access to cluster\" read xpath /cib", "pcs acl user create rouser read-only", "pcs acl User: rouser Roles: read-only Role: read-only Description: Read access to cluster Permission: read xpath /cib (read-only-read)", "[rouser ~]USD pcs client local-auth", "pcs resource create resource_id standard:provider:type|type [ resource_options ] [op operation_action operation_options [ operation_type operation_options ]...]", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2 op monitor interval=30s", "pcs resource op add resource_id operation_action [ operation_properties ]", "pcs resource op remove resource_id operation_name operation_properties", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2", "Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) stop interval=0s timeout=20s (VirtualIP-stop-timeout-20s) monitor interval=10s timeout=20s (VirtualIP-monitor-interval-10s)", "pcs resource update VirtualIP op stop interval=0s timeout=40s pcs resource config VirtualIP Resource: VirtualIP (class=ocf provider=heartbeat type=IPaddr2) Attributes: ip=192.168.0.99 cidr_netmask=24 nic=eth2 Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) monitor interval=10s timeout=20s (VirtualIP-monitor-interval-10s) stop interval=0s timeout=40s (VirtualIP-name-stop-interval-0s-timeout-40s)", "pcs resource op defaults update timeout=240s", "pcs resource update VirtualIP op monitor interval=10s", "pcs resource config VirtualIP Resource: VirtualIP (class=ocf 
provider=heartbeat type=IPaddr2) Attributes: ip=192.168.0.99 cidr_netmask=24 nic=eth2 Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) monitor interval=10s (VirtualIP-monitor-interval-10s) stop interval=0s timeout=40s (VirtualIP-name-stop-interval-0s-timeout-40s)", "pcs resource op defaults set create id=podman-timeout meta timeout=90s rule resource ::podman", "pcs resource op defaults set create id=stop-timeout meta timeout=120s rule op stop", "pcs resource op defaults set create id=podman-stop-timeout meta timeout=120s rule resource ::podman and op stop", "pcs resource op defaults Meta Attrs: podman-timeout timeout=90s Rule: boolean-op=and score=INFINITY Expression: resource ::podman", "pcs resource op defaults Meta Attrs: podman-stop-timeout timeout=120s Rule: boolean-op=and score=INFINITY Expression: resource ::podman Expression: op stop", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2", "pcs resource op add VirtualIP monitor interval=60s OCF_CHECK_LEVEL=10", "pcs property set property = value", "pcs property set symmetric-cluster=false", "pcs property unset property", "pcs property set symmetic-cluster=", "pcs property list", "pcs property list --all", "pcs property show property", "pcs property show cluster-infrastructure Cluster Properties: cluster-infrastructure: cman", "pcs property [list|show] --defaults", "pcs property set migration-limit=10", "pcs property config --output-format=cmd pcs property set --force -- migration-limit=10 placement-strategy=minimal", "pcs property set shutdown-lock=true pcs property list --all | grep shutdown-lock shutdown-lock: true shutdown-lock-limit: 0", "pcs status Full List of Resources: * first (ocf::pacemaker:Dummy): Started z3.example.com * second (ocf::pacemaker:Dummy): Started z2.example.com * third (ocf::pacemaker:Dummy): Started z1.example.com * fourth (ocf::pacemaker:Dummy): Started z2.example.com * fifth (ocf::pacemaker:Dummy): Started z1.example.com", "pcs cluster stop z1.example.com Stopping Cluster (pacemaker) Stopping Cluster (corosync)", "pcs status Node List: * Online: [ z2.example.com z3.example.com ] * OFFLINE: [ z1.example.com ] Full List of Resources: * first (ocf::pacemaker:Dummy): Started z3.example.com * second (ocf::pacemaker:Dummy): Started z2.example.com * third (ocf::pacemaker:Dummy): Stopped z1.example.com (LOCKED) * fourth (ocf::pacemaker:Dummy): Started z3.example.com * fifth (ocf::pacemaker:Dummy): Stopped z1.example.com (LOCKED)", "pcs cluster start z1.example.com Starting Cluster", "pcs status Node List: * Online: [ z1.example.com z2.example.com z3.example.com ] Full List of Resources: .. 
* first (ocf::pacemaker:Dummy): Started z3.example.com * second (ocf::pacemaker:Dummy): Started z2.example.com * third (ocf::pacemaker:Dummy): Started z1.example.com * fourth (ocf::pacemaker:Dummy): Started z3.example.com * fifth (ocf::pacemaker:Dummy): Started z1.example.com", "pcs node utilization node1 cpu=2 memory=2048 pcs node utilization node2 cpu=4 memory=2048", "pcs resource utilization dummy-small cpu=1 memory=1024 pcs resource utilization dummy-medium cpu=2 memory=2048 pcs resource utilization dummy-large cpu=3 memory=3072", "pcs property set placement-strategy=balanced", "virsh dumpxml guest1 > /etc/pacemaker/guest1.xml", "pcs resource create VM VirtualDomain config=/etc/pacemaker/guest1.xml migration_transport=ssh meta allow-migrate=true", "pcs quorum update [auto_tie_breaker=[0|1]] [last_man_standing=[0|1]] [last_man_standing_window=[ time-in-ms ] [wait_for_all=[0|1]]", "pcs quorum update wait_for_all=1 Checking corosync is not running on nodes Error: node1: corosync is running Error: node2: corosync is running pcs cluster stop --all node2: Stopping Cluster (pacemaker) node1: Stopping Cluster (pacemaker) node1: Stopping Cluster (corosync) node2: Stopping Cluster (corosync) pcs quorum update wait_for_all=1 Checking corosync is not running on nodes node2: corosync is not running node1: corosync is not running Sending updated corosync.conf to nodes node1: Succeeded node2: Succeeded pcs quorum config Options: wait_for_all: 1", "pcs quorum [config]", "pcs quorum status", "pcs quorum expected-votes votes", "pcs quorum unblock", "yum install corosync-qdevice yum install corosync-qdevice", "yum install pcs corosync-qnetd", "systemctl start pcsd.service systemctl enable pcsd.service", "pcs qdevice setup model net --enable --start Quorum device 'net' initialized quorum device enabled Starting quorum device quorum device started", "pcs qdevice status net --full QNetd address: *:5403 TLS: Supported (client certificate required) Connected clients: 0 Connected clusters: 0 Maximum send/receive size: 32768/32768 bytes", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --add-service=high-availability", "pcs host auth qdevice Username: hacluster Password: qdevice: Authorized", "pcs quorum config Options:", "pcs quorum status Quorum information ------------------ Date: Wed Jun 29 13:15:36 2016 Quorum provider: corosync_votequorum Nodes: 2 Node ID: 1 Ring ID: 1/8272 Quorate: Yes Votequorum information ---------------------- Expected votes: 2 Highest expected: 2 Total votes: 2 Quorum: 1 Flags: 2Node Quorate Membership information ---------------------- Nodeid Votes Qdevice Name 1 1 NR node1 (local) 2 1 NR node2", "pcs quorum device add model net host=qdevice algorithm=ffsplit Setting up qdevice certificates on nodes node2: Succeeded node1: Succeeded Enabling corosync-qdevice node1: corosync-qdevice enabled node2: corosync-qdevice enabled Sending updated corosync.conf to nodes node1: Succeeded node2: Succeeded Corosync configuration reloaded Starting corosync-qdevice node1: corosync-qdevice started node2: corosync-qdevice started", "pcs quorum config Options: Device: Model: net algorithm: ffsplit host: qdevice", "pcs quorum status Quorum information ------------------ Date: Wed Jun 29 13:17:02 2016 Quorum provider: corosync_votequorum Nodes: 2 Node ID: 1 Ring ID: 1/8272 Quorate: Yes Votequorum information ---------------------- Expected votes: 3 Highest expected: 3 Total votes: 3 Quorum: 2 Flags: Quorate Qdevice Membership information ---------------------- Nodeid Votes 
Qdevice Name 1 1 A,V,NMW node1 (local) 2 1 A,V,NMW node2 0 1 Qdevice", "pcs quorum device status Qdevice information ------------------- Model: Net Node ID: 1 Configured node list: 0 Node ID = 1 1 Node ID = 2 Membership node list: 1, 2 Qdevice-net information ---------------------- Cluster name: mycluster QNetd host: qdevice:5403 Algorithm: ffsplit Tie-breaker: Node with lowest node ID State: Connected", "pcs qdevice status net --full QNetd address: *:5403 TLS: Supported (client certificate required) Connected clients: 2 Connected clusters: 1 Maximum send/receive size: 32768/32768 bytes Cluster \"mycluster\": Algorithm: ffsplit Tie-breaker: Node with lowest node ID Node ID 2: Client address: ::ffff:192.168.122.122:50028 HB interval: 8000ms Configured node list: 1, 2 Ring ID: 1.2050 Membership node list: 1, 2 TLS active: Yes (client certificate verified) Vote: ACK (ACK) Node ID 1: Client address: ::ffff:192.168.122.121:48786 HB interval: 8000ms Configured node list: 1, 2 Ring ID: 1.2050 Membership node list: 1, 2 TLS active: Yes (client certificate verified) Vote: ACK (ACK)", "pcs qdevice start net pcs qdevice stop net pcs qdevice enable net pcs qdevice disable net pcs qdevice kill net", "pcs quorum device update model algorithm=lms Sending updated corosync.conf to nodes node1: Succeeded node2: Succeeded Corosync configuration reloaded Reloading qdevice configuration on nodes node1: corosync-qdevice stopped node2: corosync-qdevice stopped node1: corosync-qdevice started node2: corosync-qdevice started", "pcs quorum device remove Sending updated corosync.conf to nodes node1: Succeeded node2: Succeeded Corosync configuration reloaded Disabling corosync-qdevice node1: corosync-qdevice disabled node2: corosync-qdevice disabled Stopping corosync-qdevice node1: corosync-qdevice stopped node2: corosync-qdevice stopped Removing qdevice certificates from nodes node1: Succeeded node2: Succeeded", "pcs quorum device status Error: Unable to get quorum status: corosync-qdevice-tool: Can't connect to QDevice socket (is QDevice running?): No such file or directory", "pcs qdevice destroy net Stopping quorum device quorum device stopped quorum device disabled Quorum device 'net' configuration files removed", "install --mode=0755 /usr/share/pacemaker/alerts/alert_file.sh.sample /var/lib/pacemaker/alert_file.sh", "touch /var/log/pcmk_alert_file.log chown hacluster:haclient /var/log/pcmk_alert_file.log chmod 600 /var/log/pcmk_alert_file.log pcs alert create id=alert_file description=\"Log events to a file.\" path=/var/lib/pacemaker/alert_file.sh pcs alert recipient add alert_file id=my-alert_logfile value=/var/log/pcmk_alert_file.log", "install --mode=0755 /usr/share/pacemaker/alerts/alert_snmp.sh.sample /var/lib/pacemaker/alert_snmp.sh pcs alert create id=snmp_alert path=/var/lib/pacemaker/alert_snmp.sh meta timestamp-format=\"%Y-%m-%d,%H:%M:%S.%01N\" pcs alert recipient add snmp_alert value=192.168.1.2 pcs alert Alerts: Alert: snmp_alert (path=/var/lib/pacemaker/alert_snmp.sh) Meta options: timestamp-format=%Y-%m-%d,%H:%M:%S.%01N. 
Recipients: Recipient: snmp_alert-recipient (value=192.168.1.2)", "install --mode=0755 /usr/share/pacemaker/alerts/alert_smtp.sh.sample /var/lib/pacemaker/alert_smtp.sh pcs alert create id=smtp_alert path=/var/lib/pacemaker/alert_smtp.sh options [email protected] pcs alert recipient add smtp_alert [email protected] pcs alert Alerts: Alert: smtp_alert (path=/var/lib/pacemaker/alert_smtp.sh) Options: [email protected] Recipients: Recipient: smtp_alert-recipient ([email protected])", "pcs alert create path= path [id= alert-id ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]", "pcs alert create id=my_alert path=/path/to/myscript.sh", "pcs alert [config|show]", "pcs alert update alert-id [path= path ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]", "pcs alert remove alert-id", "pcs alert recipient add alert-id value= recipient-value [id= recipient-id ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]", "pcs alert recipient update recipient-id [value= recipient-value ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]", "pcs alert recipient remove recipient-id", "pcs alert recipient add my-alert value=my-alert-recipient id=my-recipient-id options value=some-address", "pcs alert create id=my-alert path=/path/to/myscript.sh meta timeout=15s pcs alert recipient add my-alert [email protected] id=my-alert-recipient1 meta timestamp-format=\"%D %H:%M\" pcs alert recipient add my-alert [email protected] id=my-alert-recipient2 meta timestamp-format=\"%c\"", "pcs alert create path=/my/path pcs alert recipient add alert value=rec_value pcs alert recipient add alert value=rec_value2 id=my-recipient pcs alert config Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Recipient: my-recipient (value=rec_value2)", "pcs alert create id=my-alert path=/path/to/script description=alert_description options option1=value1 opt=val meta timeout=50s timestamp-format=\"%H%B%S\" pcs alert recipient add my-alert value=my-other-recipient pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Recipient: my-recipient (value=rec_value2) Alert: my-alert (path=/path/to/script) Description: alert_description Options: opt=val option1=value1 Meta options: timestamp-format=%H%B%S timeout=50s Recipients: Recipient: my-alert-recipient (value=my-other-recipient)", "pcs alert update my-alert options option1=newvalue1 meta timestamp-format=\"%H%M%S\" pcs alert recipient update my-alert-recipient options option1=new meta timeout=60s pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Recipient: my-recipient (value=rec_value2) Alert: my-alert (path=/path/to/script) Description: alert_description Options: opt=val option1=newvalue1 Meta options: timestamp-format=%H%M%S timeout=50s Recipients: Recipient: my-alert-recipient (value=my-other-recipient) Options: option1=new Meta options: timeout=60s", "pcs alert recipient remove my-recipient pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Alert: my-alert (path=/path/to/script) Description: alert_description Options: opt=val option1=newvalue1 Meta options: timestamp-format=\"%M%B%S\" timeout=50s Recipients: Recipient: my-alert-recipient (value=my-other-recipient) Options: option1=new Meta options: timeout=60s", "pcs alert 
remove myalert pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value)", "yum install -y booth-site yum install -y booth-site yum install -y booth-site yum install -y booth-site", "yum install -y pcs booth-core booth-arbitrator", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --add-service=high-availability", "pcs booth setup sites 192.168.11.100 192.168.22.100 arbitrators 192.168.99.100", "pcs booth ticket add apacheticket", "pcs booth sync", "pcs host auth cluster1-node1 pcs booth pull cluster1-node1", "pcs host auth cluster1-node1 pcs booth pull cluster1-node1 pcs booth sync", "pcs booth start pcs booth enable", "pcs booth create ip 192.168.11.100 pcs booth create ip 192.168.22.100", "pcs constraint ticket add apacheticket apachegroup pcs constraint ticket add apacheticket apachegroup", "pcs constraint ticket [show]", "pcs booth ticket grant apacheticket", "yum install pacemaker-remote resource-agents pcs systemctl start pcsd.service systemctl enable pcsd.service firewall-cmd --add-port 3121/tcp --permanent firewall-cmd --add-port 2224/tcp --permanent firewall-cmd --reload", "pcs host auth nodename", "pcs cluster node add-guest nodename resource_id [ options ]", "pcs resource create webserver apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s pcs constraint location webserver prefers nodename", "firewall-cmd --permanent --add-service=high-availability success firewall-cmd --reload success", "yum install -y pacemaker-remote resource-agents pcs", "systemctl start pcsd.service systemctl enable pcsd.service", "pcs host auth remote1", "pcs cluster node add-remote remote1", "pcs resource create webserver apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s pcs constraint location webserver prefers remote1", "\\#==#==# Pacemaker Remote # Specify a custom port for Pacemaker Remote connections PCMK_remote_port=3121", "pcs resource disable resourcename", "pcs resource enable resourcename", "pcs node standby node | --all", "pcs node unstandby node | --all", "pcs resource move resource_id [ destination_node ] [--master] [lifetime= lifetime ]", "pcs resource move resource1 example-node2 lifetime=PT1H30M", "pcs resource move resource1 example-node2 lifetime=PT30M", "pcs resource relocate run [ resource1 ] [ resource2 ]", "pcs resource disable resource_id [--wait[= n ]]", "pcs resource enable resource_id [--wait[= n ]]", "pcs resource ban resource_id [ node ] [--master] [lifetime= lifetime ] [--wait[= n ]]", "pcs resource debug-start resource_id", "pcs resource unmanage resource1 [ resource2 ]", "pcs resource manage resource1 [ resource2 ]", "pcs property set maintenance-mode=true", "pcs property set maintenance-mode=false", "pcs property unset property", "pcs property set symmetric-cluster=", "pcs resource disable resourcename", "pcs resource enable resourcename", "pcs cluster stop", "pcs cluster start", "pcs cluster config uuid generate", "pcs cluster config uuid generate --force", "pcs host auth z1.example.com z2.example.com z3.example.com z4.example.com -u hacluster -p password z1.example.com: Authorized z2.example.com: Authorized z3.example.com: Authorized z4.example.com: Authorized", "pcs cluster setup PrimarySite z1.example.com z2.example.com --start {...} Cluster has been successfully set up. Starting cluster on hosts: 'z1.example.com', 'z2.example.com'", "pcs cluster setup DRSite z3.example.com z4.example.com --start {...} Cluster has been successfully set up. 
Starting cluster on hosts: 'z3.example.com', 'z4.example.com'", "pcs dr set-recovery-site z3.example.com Sending 'disaster-recovery config' to 'z3.example.com', 'z4.example.com' z3.example.com: successful distribution of the file 'disaster-recovery config' z4.example.com: successful distribution of the file 'disaster-recovery config' Sending 'disaster-recovery config' to 'z1.example.com', 'z2.example.com' z1.example.com: successful distribution of the file 'disaster-recovery config' z2.example.com: successful distribution of the file 'disaster-recovery config'", "pcs dr config Local site: Role: Primary Remote site: Role: Recovery Nodes: z3.example.com z4.example.com", "pcs dr status --- Local cluster - Primary site --- Cluster name: PrimarySite WARNINGS: No stonith devices and stonith-enabled is not false Cluster Summary: * Stack: corosync * Current DC: z2.example.com (version 2.0.3-2.el8-2c9cea563e) - partition with quorum * Last updated: Mon Dec 9 04:10:31 2019 * Last change: Mon Dec 9 04:06:10 2019 by hacluster via crmd on z2.example.com * 2 nodes configured * 0 resource instances configured Node List: * Online: [ z1.example.com z2.example.com ] Full List of Resources: * No resources Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled --- Remote cluster - Recovery site --- Cluster name: DRSite WARNINGS: No stonith devices and stonith-enabled is not false Cluster Summary: * Stack: corosync * Current DC: z4.example.com (version 2.0.3-2.el8-2c9cea563e) - partition with quorum * Last updated: Mon Dec 9 04:10:34 2019 * Last change: Mon Dec 9 04:09:55 2019 by hacluster via crmd on z4.example.com * 2 nodes configured * 0 resource instances configured Node List: * Online: [ z3.example.com z4.example.com ] Full List of Resources: * No resources Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html-single/configuring_and_managing_high_availability_clusters/index
Chapter 2. Tooling Guide extension pack
Chapter 2. Tooling Guide extension pack Important The VS Code extensions for Apache Camel are listed as development support. For more information about the scope of development support, see Development Support Scope of Coverage for Red Hat Build of Apache Camel. 2.1. Installing extension pack for Apache Camel by Red Hat This section explains how to install the Extension Pack for Apache Camel by Red Hat. Procedure Open the VS Code editor. In the VS Code editor, select View > Extensions. In the search bar, type Camel. Select the Extension Pack for Apache Camel by Red Hat option from the search results and then click Install. This installs the extension pack, which includes the extensions for Apache Camel, in the VS Code editor.
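As an alternative to the graphical procedure above, the extension pack can also be installed from a terminal with the VS Code command-line interface. This is only a sketch: the extension identifier used here is assumed from the Visual Studio Code Marketplace listing for the Extension Pack for Apache Camel by Red Hat, and you should verify it against the identifier shown on the extension's Marketplace page before use.
# Assumed Marketplace identifier; confirm it matches the extension's page
code --install-extension redhat.apache-camel-extension-pack
After reloading VS Code, the bundled Apache Camel extensions appear under View > Extensions, as in the procedure above.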
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/tooling_guide_for_red_hat_build_of_apache_camel/camel-tooling-guide-extension-pack
Chapter 60. Kamelet
Chapter 60. Kamelet Both producer and consumer are supported The Kamelet Component provides support for interacting with the Camel Route Template engine using Endpoint semantics. 60.1. Dependencies When using kamelet with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kamelet-starter</artifactId> </dependency> 60.2. URI format kamelet:templateId/routeId[?options] 60.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 60.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, URLs for network connections, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to configure only a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 60.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from), as a producer (to), or for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type-safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for URLs, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 60.4. Component Options The Kamelet component supports 9 options, which are listed below. Name Description Default Type location (common) The location(s) of the Kamelets on the file system. Multiple locations can be set, separated by commas. classpath:/kamelets String routeProperties (common) Set route local parameters. Map templateProperties (common) Set template local parameters. Map bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means that any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, are processed as a message and handled by the routing Error Handler. By default the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and ignored. false boolean block (producer) If sending a message to a kamelet endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active. true boolean lazyStartProducer (producer) Whether the producer should be started lazily (on the first message). Starting lazily allows CamelContext and routes to start up in situations where a producer may otherwise fail during startup and cause the route to fail to start. By deferring this startup, the startup failure can be handled during message routing via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time.
false boolean timeout (producer) The timeout value to use if block is enabled. 30000 long autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean routeTemplateLoaderListener (advanced) Autowired To plug in a custom listener for when the Kamelet component is loading Kamelets from external resources. RouteTemplateLoaderListener 60.5. Endpoint Options The Kamelet endpoint is configured using URI syntax: kamelet:templateId/routeId with the following path and query parameters: 60.5.1. Path Parameters (2 parameters) Name Description Default Type templateId (common) Required The Route Template ID. String routeId (common) The Route ID. Default value notice: The ID will be auto-generated if not provided. String 60.5.2. Query Parameters (8 parameters) Name Description Default Type location (common) Location of the Kamelet to use, which can be specified as a resource from the file system, classpath, etc. The location cannot use wildcards, and must refer to a file including extension, for example file:/etc/foo-kamelet.xml. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means that any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, are processed as a message and handled by the routing Error Handler. By default the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice that if the option bridgeErrorHandler is enabled, this option is not in use. By default the consumer deals with exceptions, which are logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern block (producer) If sending a message to a direct endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active. true boolean failIfNoConsumers (producer) Whether the producer should fail by throwing an exception when sending to a kamelet endpoint with no active consumers. true boolean lazyStartProducer (producer) Whether the producer should be started lazily (on the first message). Starting lazily allows CamelContext and routes to start up in situations where a producer may otherwise fail during startup and cause the route to fail to start. By deferring this startup, the startup failure can be handled during message routing via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. false boolean timeout (producer) The timeout value to use if block is enabled. 30000 long Note The kamelet endpoint is lenient, which means that the endpoint accepts additional parameters that are passed to the engine and consumed upon route materialization. 60.6.
Discovery If a Route Template is not found, the kamelet endpoint tries to load the related kamelet definition from the file system (by default classpath:/kamelets ). The default resolution mechanism expects kamelet files to have the extension .kamelet.yaml . 60.7. Samples Kamelets can be used as if they were standard Camel components. For example, suppose that we have created a Route Template as follows: routeTemplate("setMyBody") .templateParameter("bodyValue") .from("kamelet:source") .setBody().constant("{{bodyValue}}"); Note To let the Kamelet component wire the materialized route to the caller processor, it must be able to identify the input and output endpoints of the route. This is done by using kamelet:source to mark the input endpoint and kamelet:sink for the output endpoint. Then the template can be instantiated and invoked as shown below: from("direct:setMyBody") .to("kamelet:setMyBody?bodyValue=myKamelet"); Behind the scenes, the Kamelet component does the following things: It instantiates a route out of the Route Template identified by the given templateId path parameter (in this case setMyBody ) It will act like the direct component and connect the current route to the materialized one. If you had to do it programmatically, it would have been something like: routeTemplate("setMyBody") .templateParameter("bodyValue") .from("direct:{{foo}}") .setBody().constant("{{bodyValue}}"); TemplatedRouteBuilder.builder(context, "setMyBody") .parameter("foo", "bar") .parameter("bodyValue", "myKamelet") .add(); from("direct:template") .to("direct:bar"); 60.8. Spring Boot Auto-Configuration The component supports 10 options, which are listed below. Name Description Default Type camel.component.kamelet.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kamelet.block If sending a message to a kamelet endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active. true Boolean camel.component.kamelet.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kamelet.enabled Whether to enable auto configuration of the kamelet component. This is enabled by default. Boolean camel.component.kamelet.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.
false Boolean camel.component.kamelet.location The location(s) of the Kamelets on the file system. Multiple locations can be set separated by comma. classpath:/kamelets String camel.component.kamelet.route-properties Set route local parameters. Map camel.component.kamelet.route-template-loader-listener To plugin a custom listener for when the Kamelet component is loading Kamelets from external resources. The option is a org.apache.camel.spi.RouteTemplateLoaderListener type. RouteTemplateLoaderListener camel.component.kamelet.template-properties Set template local parameters. Map camel.component.kamelet.timeout The timeout value to use if block is enabled. 30000 Long
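The Spring Boot configuration keys listed above can be set directly in the application.properties file. The following is a minimal sketch; the extra file-system location and the 60-second timeout are illustrative assumptions rather than recommended values:

# Load Kamelets from the default classpath location plus an additional directory (illustrative path)
camel.component.kamelet.location = classpath:/kamelets,file:/etc/camel/kamelets
# Keep the default blocking behavior, but wait up to 60 seconds for a kamelet consumer to become active
camel.component.kamelet.block = true
camel.component.kamelet.timeout = 60000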
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kamelet-starter</artifactId> </dependency>", "kamelet:templateId/routeId[?options]", "kamelet:templateId/routeId", "routeTemplate(\"setMyBody\") .templateParameter(\"bodyValue\") .from(\"kamelet:source\") .setBody().constant(\"{{bodyValue}}\");", "from(\"direct:setMyBody\") .to(\"kamelet:setMyBody?bodyValue=myKamelet\");", "routeTemplate(\"setMyBody\") .templateParameter(\"bodyValue\") .from(\"direct:{{foo}}\") .setBody().constant(\"{{bodyValue}}\"); TemplatedRouteBuilder.builder(context, \"setMyBody\") .parameter(\"foo\", \"bar\") .parameter(\"bodyValue\", \"myKamelet\") .add(); from(\"direct:template\") .to(\"direct:bar\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kamelet-component-starter
Chapter 12. Creating a kernel-based virtual machine and booting the installation ISO in the VM
Chapter 12. Creating a kernel-based virtual machine and booting the installation ISO in the VM You can create a kernel-based virtual machine (KVM) and start the Red Hat Enterprise Linux installation. The following instructions are specific for installation on a VM. If you are installing RHEL on a physical system, you can skip this section. Procedure Create a virtual machine with the instance of Red Hat Enterprise Linux as a KVM guest operating system, by using the following virt-install command on the KVM host: Additional resources virt-install man page on your system Creating virtual machines by using the command line
[ "virt-install --name=<guest_name> --disk size=<disksize_in_GB> --memory=<memory_size_in_MB> --cdrom <filepath_to_iso> --graphics vnc" ]
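A filled-in version of the virt-install command might look like the following sketch; the guest name, disk size, memory size, and ISO path are hypothetical values that you replace with your own:

# Hypothetical example: 40 GB disk, 4096 MB of RAM, installation ISO stored under /var/lib/libvirt/images/
virt-install \
  --name=rhel9-guest \
  --disk size=40 \
  --memory=4096 \
  --cdrom /var/lib/libvirt/images/rhel-9-x86_64-dvd.iso \
  --graphics vnc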
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_over_the_network/installing-under-kvm_rhel-installer
Chapter 16. Managing power consumption with PowerTOP
Chapter 16. Managing power consumption with PowerTOP As a system administrator, you can use the PowerTOP tool to analyze and manage power consumption. 16.1. The purpose of PowerTOP PowerTOP is a program that diagnoses issues related to power consumption and provides suggestions on how to extend battery lifetime. The PowerTOP tool can provide an estimate of the total power usage of the system and also individual power usage for each process, device, kernel worker, timer, and interrupt handler. The tool can also identify specific components of kernel and user-space applications that frequently wake up the CPU. Red Hat Enterprise Linux 8 uses version 2.x of PowerTOP . 16.2. Using PowerTOP Prerequisites To be able to use PowerTOP , make sure that the powertop package has been installed on your system: 16.2.1. Starting PowerTOP Procedure To run PowerTOP , use the following command: Important Laptops should run on battery power when running the powertop command. 16.2.2. Calibrating PowerTOP Procedure On a laptop, you can calibrate the power estimation engine by running the following command: Let the calibration finish without interacting with the machine during the process. Calibration takes time because the process performs various tests, cycles through brightness levels and switches devices on and off. When the calibration process is completed, PowerTOP starts as normal. Let it run for approximately an hour to collect data. When enough data is collected, power estimation figures will be displayed in the first column of the output table. Note Note that powertop --calibrate can only be used on laptops. 16.2.3. Setting the measuring interval By default, PowerTOP takes measurements in 20 seconds intervals. If you want to change this measuring frequency, use the following procedure: Procedure Run the powertop command with the --time option: 16.2.4. Additional resources For more details on how to use PowerTOP , see the powertop man page on your system 16.3. PowerTOP statistics While it runs, PowerTOP gathers statistics from the system. PowerTOP 's output provides multiple tabs: Overview Idle stats Frequency stats Device stats Tunables WakeUp You can use the Tab and Shift+Tab keys to cycle through these tabs. 16.3.1. The Overview tab In the Overview tab, you can view a list of the components that either send wakeups to the CPU most frequently or consume the most power. The items within the Overview tab, including processes, interrupts, devices, and other resources, are sorted according to their utilization. The adjacent columns within the Overview tab provide the following pieces of information: Usage Power estimation of how the resource is being used. Events/s Wakeups per second. The number of wakeups per second indicates how efficiently the services or the devices and drivers of the kernel are performing. Less wakeups means that less power is consumed. Components are ordered by how much further their power usage can be optimized. Category Classification of the component; such as process, device, or timer. Description Description of the component. If properly calibrated, a power consumption estimation for every listed item in the first column is shown as well. Apart from this, the Overview tab includes the line with summary statistics such as: Total power consumption Remaining battery life (only if applicable) Summary of total wakeups per second, GPU operations per second, and virtual file system operations per second 16.3.2. 
The Idle stats tab The Idle stats tab shows usage of C-states for all processors and cores, while the Frequency stats tab shows usage of P-states including the Turbo mode, if applicable, for all processors and cores. The duration of C- or P-states is an indication of how well the CPU usage has been optimized. The longer the CPU stays in the higher C- or P-states (for example C4 is higher than C3), the better the CPU usage optimization is. Ideally, residency is 90% or more in the highest C- or P-state when the system is idle. 16.3.3. The Device stats tab The Device stats tab provides similar information to the Overview tab but only for devices. 16.3.4. The Tunables tab The Tunables tab contains PowerTOP 's suggestions for optimizing the system for lower power consumption. Use the up and down keys to move through suggestions, and the enter key to toggle the suggestion on or off. 16.3.5. The WakeUp tab The WakeUp tab displays the device wakeup settings available for users to change as and when required. Use the up and down keys to move through the available settings, and the enter key to enable or disable a setting. Figure 16.1. PowerTOP output Additional resources For more details on PowerTOP , see PowerTOP's home page . 16.4. Why Powertop does not display Frequency stats values in some instances While using the Intel P-State driver, PowerTOP only displays values in the Frequency Stats tab if the driver is in passive mode. But, even in this case, the values may be incomplete. In total, there are three possible modes of the Intel P-State driver: Active mode with Hardware P-States (HWP) Active mode without HWP Passive mode Switching to the ACPI CPUfreq driver results in complete information being displayed by PowerTOP. However, it is recommended to keep your system on the default settings. To see what driver is loaded and in what mode, run: intel_pstate is returned if the Intel P-State driver is loaded and in active mode. intel_cpufreq is returned if the Intel P-State driver is loaded and in passive mode. acpi-cpufreq is returned if the ACPI CPUfreq driver is loaded. While using the Intel P-State driver, add the following argument to the kernel boot command line to force the driver to run in passive mode: To disable the Intel P-State driver and use, instead, the ACPI CPUfreq driver, add the following argument to the kernel boot command line: 16.5. Generating an HTML output Apart from the powertop's output in terminal, you can also generate an HTML report. Procedure Run the powertop command with the --html option: Replace the htmlfile.html parameter with the required name for the output file. 16.6. Optimizing power consumption To optimize power consumption, you can use either the powertop service or the powertop2tuned utility. 16.6.1. Optimizing power consumption using the powertop service You can use the powertop service to automatically enable all PowerTOP 's suggestions from the Tunables tab on the boot: Procedure Enable the powertop service: 16.6.2. The powertop2tuned utility The powertop2tuned utility allows you to create custom TuneD profiles from PowerTOP suggestions. By default, powertop2tuned creates profiles in the /etc/tuned/ directory, and bases the custom profile on the currently selected TuneD profile. For safety reasons, all PowerTOP tunings are initially disabled in the new profile. To enable the tunings, you can: Uncomment them in the /etc/tuned/profile_name/tuned.conf file . 
Use the --enable or -e option to generate a new profile that enables most of the tunings suggested by PowerTOP . Certain potentially problematic tunings, such as the USB autosuspend, are disabled by default and need to be uncommented manually. 16.6.3. Optimizing power consumption using the powertop2tuned utility Prerequisites The powertop2tuned utility is installed on the system: Procedure Create a custom profile: Activate the new profile: Additional information For a complete list of options that powertop2tuned supports, use: 16.6.4. Comparison of powertop.service and powertop2tuned Optimizing power consumption with powertop2tuned is preferred over powertop.service for the following reasons: The powertop2tuned utility represents the integration of PowerTOP into TuneD , which enables you to benefit from the advantages of both tools. The powertop2tuned utility allows for fine-grained control of the enabled tunings. With powertop2tuned , potentially dangerous tunings are not automatically enabled. With powertop2tuned , rollback is possible without a reboot.
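Putting the powertop2tuned steps together, a typical session might look like the following sketch; the profile name office-laptop is a hypothetical example:

# Install the utility, create a profile based on the currently active TuneD profile,
# and enable most of the suggested tunings in one step (-e)
yum install tuned-utils
powertop2tuned -e office-laptop
# Review the generated profile and uncomment any remaining tunings you want to apply
vi /etc/tuned/office-laptop/tuned.conf
# Activate the new profile
tuned-adm profile office-laptop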
[ "yum install powertop", "powertop", "powertop --calibrate", "powertop --time= time in seconds", "cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver", "intel_pstate=passive", "intel_pstate=disable", "powertop --html=htmlfile.html", "systemctl enable powertop", "yum install tuned-utils", "powertop2tuned new_profile_name", "tuned-adm profile new_profile_name", "powertop2tuned --help" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/managing-power-consumption-with-powertop_monitoring-and-managing-system-status-and-performance
Chapter 7. Security
Chapter 7. Security 7.1. Securing connections with SSL/TLS AMQ C++ uses SSL/TLS to encrypt communication between clients and servers. To connect to a remote server with SSL/TLS, set the ssl_client_options connection option and use a connection URL with the amqps scheme. The ssl_client_options constructor takes the filename, directory, or database ID of a CA certificate. Example: Enabling SSL/TLS proton::ssl_client_options sopts {"/etc/pki/ca-trust"}; proton::connection_options opts {}; opts.ssl_client_options(sopts); container.connect(" amqps ://example.com", opts); 7.2. Connecting with a user and password AMQ C++ can authenticate connections with a user and password. To specify the credentials used for authentication, set the user and password options on the connect method. Example: Connecting with a user and password proton::connection_options opts {}; opts.user("alice"); opts.password("secret"); container.connect("amqps://example.com", opts); 7.3. Configuring SASL authentication AMQ C++ uses the SASL protocol to perform authentication. SASL can use a number of different authentication mechanisms . When two network peers connect, they exchange their allowed mechanisms, and the strongest mechanism allowed by both is selected. Note The client uses Cyrus SASL to perform authentication. Cyrus SASL uses plug-ins to support specific SASL mechanisms. Before you can use a particular SASL mechanism, the relevant plug-in must be installed. For example, you need the cyrus-sasl-plain plug-in in order to use SASL PLAIN authentication. To see a list of Cyrus SASL plug-ins in Red Hat Enterprise Linux, use the yum search cyrus-sasl command. To install a Cyrus SASL plug-in, use the yum install PLUG-IN command. By default, AMQ C++ allows all of the mechanisms supported by the local SASL library configuration. To restrict the allowed mechanisms and thereby control what mechanisms can be negotiated, use the sasl_allowed_mechs connection option. This option accepts a string containing a space-separated list of mechanism names. Example: Configuring SASL authentication proton::connection_options opts {}; opts.sasl_allowed_mechs("ANONYMOUS") ; container.connect("amqps://example.com", opts); This example forces the connection to authenticate using the ANONYMOUS mechanism even if the server we connect to offers other options. Valid mechanisms include ANONYMOUS , PLAIN , SCRAM-SHA-256 , SCRAM-SHA-1 , GSSAPI , and EXTERNAL . AMQ C++ enables SASL by default. To disable it, set the sasl_enabled connection option to false. Example: Disabling SASL proton::connection_options opts {}; opts.sasl_enabled(false); container.connect("amqps://example.com", opts); 7.4. Authenticating using Kerberos Kerberos is a network protocol for centrally managed authentication based on the exchange of encrypted tickets. See Using Kerberos for more information. Configure Kerberos in your operating system. See Configuring Kerberos to set up Kerberos on Red Hat Enterprise Linux. Enable the GSSAPI SASL mechanism in your client application. proton::connection_options opts {}; opts.sasl_allowed_mechs("GSSAPI") ; container.connect("amqps://example.com", opts); Use the kinit command to authenticate your user credentials and store the resulting Kerberos ticket. USD kinit USER @ REALM Run the client program.
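The connection options shown in this chapter can be combined on a single proton::connection_options object. The following is a minimal sketch under stated assumptions: the CA path, the credentials, and the broker URL are placeholders, and the handler does nothing beyond opening the connection.

#include <proton/connection_options.hpp>
#include <proton/container.hpp>
#include <proton/messaging_handler.hpp>
#include <proton/ssl.hpp>

// Minimal sketch: combine SSL/TLS, user and password credentials, and a restricted SASL mechanism list.
struct secure_client : proton::messaging_handler {
    void on_container_start(proton::container& c) override {
        proton::ssl_client_options sopts {"/etc/pki/ca-trust"};   // CA certificate location (placeholder)

        proton::connection_options opts {};
        opts.ssl_client_options(sopts);      // encrypt the connection
        opts.user("alice");                  // authentication credentials (placeholders)
        opts.password("secret");
        opts.sasl_allowed_mechs("PLAIN");    // only negotiate the PLAIN mechanism

        c.connect("amqps://example.com", opts);
    }
};

int main() {
    secure_client handler;
    proton::container(handler).run();
    return 0;
}

Because PLAIN sends the password over the connection, restricting SASL to PLAIN is only reasonable when SSL/TLS is enabled, as it is in this sketch.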
[ "proton::ssl_client_options sopts {\"/etc/pki/ca-trust\"}; proton::connection_options opts {}; opts.ssl_client_options(sopts); container.connect(\" amqps ://example.com\", opts);", "proton::connection_options opts {}; opts.user(\"alice\"); opts.password(\"secret\"); container.connect(\"amqps://example.com\", opts);", "proton::connection_options opts {}; opts.sasl_allowed_mechs(\"ANONYMOUS\") ; container.connect(\"amqps://example.com\", opts);", "proton::connection_options opts {}; opts.sasl_enabled(false); container.connect(\"amqps://example.com\", opts);", "proton::connection_options opts {}; opts.sasl_allowed_mechs(\"GSSAPI\") ; container.connect(\"amqps://example.com\", opts);", "kinit USER @ REALM" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_cpp_client/security
Chapter 12. Next steps
Chapter 12. Next steps To start deploying your OpenShift Data Foundation, you can use the internal mode within OpenShift Container Platform or use external mode to make services available from a cluster running outside of OpenShift Container Platform. Depending on your requirement, go to the respective deployment guides. Internal mode Deploying OpenShift Data Foundation using Amazon web services Deploying OpenShift Data Foundation using Bare Metal Deploying OpenShift Data Foundation using VMWare vSphere Deploying OpenShift Data Foundation using Microsoft Azure Deploying OpenShift Data Foundation using Google Cloud Deploying OpenShift Data Foundation using Red Hat OpenStack Platform [Technology Preview] Deploying OpenShift Data Foundation on IBM Power Deploying OpenShift Data Foundation on IBM Z Deploying OpenShift Data Foundation on any platform External mode Deploying OpenShift Data Foundation in external mode Internal or external For deploying multiple clusters, see Deploying multiple OpenShift Data Foundation clusters.
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/planning_your_deployment/next_steps
7.147. openssl
7.147. openssl 7.147.1. RHBA-2015:1398 - openssl bug fix and enhancement update Updated openssl packages that fix two bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. OpenSSL is a toolkit that implements the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols, as well as a full-strength general-purpose cryptography library. Bug Fixes BZ# 1119191 Previously, the ciphers(1) manual page did not describe the following Elliptic Curve Cryptography (ECC) cipher suite groups: Elliptic Curve Diffie-Hellman (ECDH) and Elliptic Curve Digital Signature Algorithm (ECDSA), or TLS version 1.2 (TLSv1.2) specific features. This update adds the missing description of the ECDH and ECDSA cipher groups and TLSv1.2 features to ciphers(1), and the documentation is now complete. BZ# 1234487 Previously, server-side renegotiation support did not work as expected under certain circumstances. PostgreSQL database dumps over a TLS connection could fail when the size of the dumped data was larger than the value defined in the ssl_renegotiation_limit setting. The regression that caused this bug has been fixed, and PostgreSQL database dumps over a TLS connection no longer fail in the described situation. Enhancement BZ# 961965 This update adds the "-keytab" option to the "openssl s_server" command and the "-krb5svc" option to the "openssl s_server" and "openssl s_client" commands. The "-keytab" option allows the user to specify a custom keytab location; if the user does not add "-keytab", the openssl utility assumes the default keytab location. The "-krb5svc" option enables selecting a service other than the "host" service; this allows unprivileged users without keys to the host principal to use "openssl s_server" and "openssl s_client" with Kerberos. Users of openssl are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. For the update to take effect, all services linked to the OpenSSL library must be restarted, or the system rebooted.
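As an illustrative sketch of the new options, a session might look like the following; the port, certificate files, keytab path, and service name are hypothetical, and the remaining flags are standard s_server and s_client options:

# Start a test server on port 4433 with a custom keytab and the "ldap" service instead of "host"
openssl s_server -accept 4433 -cert server.pem -key server.key -keytab /etc/openssl/test.keytab -krb5svc ldap

# Connect a client that requests the same non-default service
openssl s_client -connect server.example.com:4433 -krb5svc ldap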
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-openssl
Chapter 12. Configuring manual node reboot to define KernelArgs
Chapter 12. Configuring manual node reboot to define KernelArgs Overcloud nodes are automatically rebooted when the overcloud deployment includes setting the KernelArgs for the first time. Rebooting nodes can be an issue for existing workloads if you are adding KernelArgs to a deployment that is already in production. You can disable the automatic rebooting of nodes when updating a deployment, and instead perform node reboots manually after each overcloud deployment. Note If you disable automatic reboot and then add new Compute nodes to your deployment, the new nodes will not be rebooted during their initial provisioning. This might cause deployment errors because the configuration of KernelArgs is applied only after a reboot. 12.1. Configuring manual node reboot to define KernelArgs You can disable the automatic rebooting of nodes when you configure KernelArgs for the first time, and instead reboot the nodes manually. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Enable the KernelArgsDeferReboot role parameter in a custom environment file, for example, kernelargs_manual_reboot.yaml : Add your custom environment file to the stack with your other environment files and deploy the overcloud: Retrieve a list of your Compute nodes to identify the host name of the node that you want to reboot: Disable the Compute service on the Compute node you want to reboot, to prevent the Compute scheduler from assigning new instances to the node: Replace <node> with the host name of the node you want to disable the Compute service on. Retrieve a list of the instances hosted on the Compute node that you want to migrate: Migrate the instances to another Compute node. For information on migrating instances, see Migrating virtual machine instances between Compute nodes . Log in to the node that you want to reboot. Reboot the node: Wait until the node boots. Re-enable the Compute node: Check that the Compute node is enabled:
[ "[stack@director ~]USD source ~/stackrc", "parameter_defaults: <Role>Parameters: KernelArgsDeferReboot: True", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/kernelargs_manual_reboot.yaml", "(undercloud)USD source ~/overcloudrc (overcloud)USD openstack compute service list", "(overcloud)USD openstack compute service set <node> nova-compute --disable", "(overcloud)USD openstack server list --host <node_UUID> --all-projects", "[tripleo-admin@overcloud-compute-0 ~]USD sudo reboot", "(overcloud)USD openstack compute service set <node_UUID> nova-compute --enable", "(overcloud)USD openstack compute service list" ]
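As a concrete sketch of the kernelargs_manual_reboot.yaml environment file, the role name below is an illustrative assumption; use the role for which you define KernelArgs:

# kernelargs_manual_reboot.yaml (ComputeOvsDpdk is a hypothetical role name)
parameter_defaults:
  ComputeOvsDpdkParameters:
    KernelArgsDeferReboot: True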
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-manual-node-reboot-to-define-kernelargs_kernelargs-manual-reboot
Chapter 8. Build-time network policy tools
Chapter 8. Build-time network policy tools Build-time network policy tools let you automate the creation and validation of Kubernetes network policies in your development and operations workflows using the roxctl CLI. These tools work with a specified file directory containing your project's workload and network policy manifests and do not require RHACS authentication. Table 8.1. Network policy tools Command Description roxctl netpol generate Generates Kubernetes network policies by analyzing your project's YAML manifests in a specified directory. For more information, see Using the build-time network policy generator . roxctl netpol connectivity map Lists the allowed connections between workloads in your project directory by examining the workload and Kubernetes network policy manifests. You can generate the output in various text formats or in a graphical .dot format. For more information, see Connectivity mapping using the roxctl netpol connectivity map command . roxctl netpol connectivity diff Creates a list of variations in the allowed connections between two project versions. This is determined by the workload and Kubernetes network policy manifests in each version's directory. This feature shows the semantic differences which are not obvious when performing a source code (syntactic) diff . For more information, see Identifying the differences in allowed connections between project versions . 8.1. Using the build-time network policy generator The build-time network policy generator can automatically generate Kubernetes network policies based on application YAML manifests. You can use it to develop network policies as part of the continuous integration/continuous deployment (CI/CD) pipeline before deploying applications on your cluster. Red Hat developed this feature in partnership with the developers of the NP-Guard project . First, the build-time network policy generator analyzes Kubernetes manifests in a local folder, including service manifests, config maps, and workload manifests such as Pod , Deployment , ReplicaSet , Job , DaemonSet , and StatefulSet . Then, it discovers the required connectivity and creates the Kubernetes network policies to achieve pod isolation. These policies allow no more and no less than the needed ingress and egress traffic. 8.1.1. Generating build-time network policies The build-time network policy generator is included in the roxctl CLI. For the build-time network policy generation feature, roxctl CLI does not need to communicate with RHACS Central so you can use it in any development environment. Prerequisites The build-time network policy generator recursively scans the directory you specify when you run the command. Therefore, before you run the command, you must already have service manifests, config maps, and workload manifests such as Pod , Deployment , ReplicaSet , Job , DaemonSet , and StatefulSet as YAML files in the specified directory. Verify that you can apply these YAML files as-is using the kubectl apply -f command. The build-time network policy generator does not work with files that use Helm style templating. Verify that the service network addresses are not hard-coded. Every workload that needs to connect to a service must specify the service network address as a variable. You can specify this variable by using the workload's resource environment variable or in a config map. 
Example 1: using an environment variable Example 2: using a config map Example 3: using a config map Service network addresses must match the following official regular expression pattern: 1 In this pattern, <svc> is the service name. <ns> is the namespace where you defined the service. <portNum> is the exposed service port number. Following are some examples that match the pattern: wordpress-mysql:3306 redis-follower.redis.svc.cluster.local:6379 redis-leader.redis http://rating-service. Procedure Verify that the build-time network policy generation feature is available by running the help command: USD roxctl netpol generate -h Generate the policies by using the netpol generate command: USD roxctl netpol generate <folder_path> [flags] 1 1 Specify the path to the folder, which can include sub-folders that contain YAML resources for analysis. The command scans the entire sub-folder tree. Optionally, you can also specify parameters to modify the behavior of the command. For more information about optional parameters, see roxctl netpol generate command options . steps After generating the policies, you must inspect them for completeness and accuracy, in case any relevant network address was not specified as expected in the YAML files. Most importantly, verify that required connections are not blocked by the isolating policies. To help with this inspection you can use the roxctl netpol connectivity map tool. Note Applying network policies to the cluster as part of the workload deployment using automation saves time and ensures accuracy. You can follow a GitOps approach by submitting the generated policies using pull requests, providing the team an opportunity to review the policies before deploying them as part of the pipeline. 8.1.2. roxctl netpol generate command options The roxctl netpol generate command supports the following options: Option Description -h, --help View the help text for the netpol command. -d, --output-dir <dir> Save the generated policies into a target folder. One file per policy. -f, --output-file <filename> Save and merge the generated policies into a single YAML file. --fail Fail on the first encountered error. The default value is false . --remove Remove the output path if it already exist. --strict Treat warnings as errors. The default value is false . 8.2. Connectivity mapping using the roxctl netpol connectivity map command Connectivity mapping provides details on the allowed connections between different workloads based on network policies defined in Kubernetes manifests. You can visualize and understand how different workloads in your Kubernetes environment are allowed to communicate with each other according to the network policies you set up. To retrieve connectivity mapping information, the roxctl netpol connectivity map command requires a directory path that contains Kubernetes workloads and network policy manifests. The output provides details about connectivity details within the Kubernetes resources analyzed. 8.2.1. Retrieving connectivity mapping information from a Kubernetes manifest directory Procedure Run the following command to retrieve the connectivity mapping information: USD roxctl netpol connectivity map <folder_path> [flags] 1 1 Specify the path to the folder, which can include sub-folders that contain YAML resources and network policies for analysis, for example, netpol-analysis-example-minimal/ . The command scans the entire sub-folder tree. Optionally, you can also specify parameters to modify the behavior of the command. 
For more information about optional parameters, see roxctl netpol connectivity map command options . Example 8.1. Example output src dst conn 0.0.0.0-255.255.255.255 default/frontend[Deployment] TCP 8080 default/frontend[Deployment] 0.0.0.0-255.255.255.255 UDP 53 default/frontend[Deployment] default/backend[Deployment] TCP 9090 The output shows you a table with a list of allowed connectivity lines. Each connectivity line consists of three parts: source ( src ), destination ( dst ), and allowed connectivity attributes ( conn ). You can interpret src as the source endpoint, dst as the destination endpoint, and conn as the allowable connectivity attributes. An endpoint has the format namespace/name[Kind] , for example, default/backend[Deployment] . 8.2.2. Connectivity map output formats and visualizations You can use various output formats, including txt , md , csv , json , and dot . The dot format is ideal for visualizing the output as a connectivity graph. It can be viewed using graph visualization software such as Graphviz tool , and extensions to VSCode . You can convert the dot output to formats such as svg , jpeg , or png using Graphviz, whether it is installed locally or through an online viewer. 8.2.3. Generating svg graphs from the dot output using Graphviz Follow these steps to create a graph in svg format from the dot output. Prerequisites Graphviz is installed on your local system. Procedure Run the following command to create the graph in svg format: USD dot -Tsvg connlist_output.dot > connlist_output_graph.svg The following are examples of the dot output and the resulting graph generated by Graphviz: Example 1: dot output Example 2: Graph generated by Graphviz 8.2.4. roxctl netpol connectivity map command options The roxctl netpol connectivity map command supports the following options: Option Description --fail Fail on the first encountered error. The default value is false . --focus-workload string Focus on connections of a specified workload name in the output. -h , --help View the help text for the roxctl netpol connectivity map command. -f , --output-file string Save the connections list output into a specific file. -o , --output-format string Configure the output format. The supported formats are txt , json , md , dot , and csv . The default value is txt . --remove Remove the output path if it already exists. The default value is false . --save-to-file Save the connections list output into a default file. The default value is false . --strict Treat warnings as errors. The default value is false . 8.3. Identifying the differences in allowed connections between project versions This command helps you understand the differences in allowed connections between two project versions. It analyses the workload and Kubernetes network policy manifests located in each version's directory and creates a representation of the differences in text format. You can view connectivity difference reports in a variety of output formats, including text , md , dot , and csv . 8.3.1. Generating connectivity difference reports with the roxctl netpol connectivity diff command To produce a connectivity difference report, the roxctl netpol connectivity diff command requires two folders, dir1 and dir2 , each containing Kubernetes manifests, including network policies. 
Procedure Run the following command to determine the connectivity differences between the Kubernetes manifests in the specified directories: USD roxctl netpol connectivity diff --dir1= <folder_path_1> --dir2= <folder_path_2> [flags] 1 1 Specify the path to the folders, which can include sub-folders that contain YAML resources and network policies for analysis. The command scans the entire sub-folder trees for both the directories. For example, <folder_path_1> is netpol-analysis-example-minimal/ and <folder_path_2> is netpol-diff-example-minimal/ . Optionally, you can also specify parameters to modify the behavior of the command. For more information about optional parameters, see roxctl netpol connectivity diff command options . Note The command considers all YAML files that you can accept using kubectl apply -f , and then these become valid inputs for your roxctl netpol connectivity diff command. Example 8.2. Example output diff-type source destination dir 1 dir 2 workloads-diff-info changed default/frontend[Deployment] default/backend[Deployment] TCP 9090 TCP 9090,UDP 53 added 0.0.0.0-255.255.255.255 default/backend[Deployment] No Connections TCP 9090 The semantic difference report gives you an overview of the connections that were changed, added, or removed in dir2 compared to the connections allowed in dir1 . When you review the output, each line represents one allowed connection that was added, removed, or changed in dir2 compared to dir1 . The following are example outputs generated by the roxctl netpol connectivity diff command in various formats: Example 1: text format Example 2: md format Example 3: svg graph generated from dot format Example 4: csv format If applicable, the workloads-diff-info provides additional details about added or removed workloads related to the added or removed connection. For example, if a connection from workload A to workload B is removed because workload B was deleted, the workloads-diff-info indicates that workload B was removed. However, if such a connection was removed only because of network policy changes and neither workload A nor B was deleted, the workloads-diff-info is empty. 8.3.2. roxctl netpol connectivity diff command options The roxctl netpol connectivity diff command supports the following options: Option Description --dir1 string First directory path of the input resources. This is a mandatory option. --dir2 string Second directory path of the input resources to be compared with the first directory path. This is a mandatory option. --fail Fail on the first encountered error. The default value is false . -h , --help View the help text for the roxctl netpol connectivity diff command. -f , --output-file string Save the connections difference output into a specific file. -o , --output-format string Configure the output format. The supported formats are txt , md , dot , and csv . The default value is txt . --remove Remove the output path if it already exists. The default value is false . --save-to-file Save the connections difference output into default a file. The default value is false . --strict Treat warnings as errors. The default value is false . 8.3.3. Distinguishing between syntactic and semantic difference outputs In the following example, dir1 is netpol-analysis-example-minimal/ , and dir2 is netpol-diff-example-minimal/ . The difference between the directories is a small change in the network policy backend-netpol . 
Example policy from dir1 : apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: creationTimestamp: null name: backend-netpol spec: ingress: - from: - podSelector: matchLabels: app: frontend ports: - port: 9090 protocol: TCP podSelector: matchLabels: app: backendservice policyTypes: - Ingress - Egress status: {} The change in dir2 is an added - before the ports attribute, which produces a difference output. 8.3.3.1. Syntactic difference output Procedure Run the following command to compare the contents of the netpols.yaml files in the two specified directories: Example output 12c12 < - ports: --- > ports: 8.3.3.2. Semantic difference output Procedure Run the following command to analyze the connectivity differences between the Kubernetes manifests and network policies in the two specified directories: USD roxctl netpol connectivity diff --dir1=roxctl/netpol/connectivity/diff/testdata/netpol-analysis-example-minimal/ --dir2=roxctl/netpol/connectivity/diff/testdata/netpol-diff-example-minimal Example output Connectivity diff: diff-type: changed, source: default/frontend[Deployment], destination: default/backend[Deployment], dir1: TCP 9090, dir2: TCP 9090,UDP 53 diff-type: added, source: 0.0.0.0-255.255.255.255, destination: default/backend[Deployment], dir1: No Connections, dir2: TCP 9090
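Because all three netpol commands work on plain manifest directories and need no RHACS authentication, they fit naturally into a CI job. The following sketch uses hypothetical directory names; the flags are the ones documented in the tables above:

# Regenerate one network policy file per workload from the manifests in deploy/,
# overwriting previous output and treating warnings as errors so the job fails fast
roxctl netpol generate deploy/ --output-dir build/netpols --remove --strict

# Report how the allowed connections changed compared to the previous release
roxctl netpol connectivity diff --dir1=release-1.0/ --dir2=deploy/ --output-format md --output-file netpol-diff.md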
[ "(http(s)?://)?<svc>(.<ns>(.svc.cluster.local)?)?(:<portNum>)? 1", "roxctl netpol generate -h", "roxctl netpol generate <folder_path> [flags] 1", "roxctl netpol connectivity map <folder_path> [flags] 1", "dot -Tsvg connlist_output.dot > connlist_output_graph.svg", "roxctl netpol connectivity diff --dir1= <folder_path_1> --dir2= <folder_path_2> [flags] 1", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: creationTimestamp: null name: backend-netpol spec: ingress: - from: - podSelector: matchLabels: app: frontend ports: - port: 9090 protocol: TCP podSelector: matchLabels: app: backendservice policyTypes: - Ingress - Egress status: {}", "diff netpol-diff-example-minimal/netpols.yaml netpol-analysis-example-minimal/netpols.yaml", "12c12 < - ports: --- > ports:", "roxctl netpol connectivity diff --dir1=roxctl/netpol/connectivity/diff/testdata/netpol-analysis-example-minimal/ --dir2=roxctl/netpol/connectivity/diff/testdata/netpol-diff-example-minimal", "Connectivity diff: diff-type: changed, source: default/frontend[Deployment], destination: default/backend[Deployment], dir1: TCP 9090, dir2: TCP 9090,UDP 53 diff-type: added, source: 0.0.0.0-255.255.255.255, destination: default/backend[Deployment], dir1: No Connections, dir2: TCP 9090" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/operating/build-time-network-policy-tools
Chapter 3. Differences between OpenShift Container Platform 3 and 4
Chapter 3. Differences between OpenShift Container Platform 3 and 4 OpenShift Container Platform 4.9 introduces architectural changes and enhancements/ The procedures that you used to manage your OpenShift Container Platform 3 cluster might not apply to OpenShift Container Platform 4. For information on configuring your OpenShift Container Platform 4 cluster, review the appropriate sections of the OpenShift Container Platform documentation. For information on new features and other notable technical changes, review the OpenShift Container Platform 4.9 release notes . It is not possible to upgrade your existing OpenShift Container Platform 3 cluster to OpenShift Container Platform 4. You must start with a new OpenShift Container Platform 4 installation. Tools are available to assist in migrating your control plane settings and application workloads. 3.1. Architecture With OpenShift Container Platform 3, administrators individually deployed Red Hat Enterprise Linux (RHEL) hosts, and then installed OpenShift Container Platform on top of these hosts to form a cluster. Administrators were responsible for properly configuring these hosts and performing updates. OpenShift Container Platform 4 represents a significant change in the way that OpenShift Container Platform clusters are deployed and managed. OpenShift Container Platform 4 includes new technologies and functionality, such as Operators, machine sets, and Red Hat Enterprise Linux CoreOS (RHCOS), which are core to the operation of the cluster. This technology shift enables clusters to self-manage some functions previously performed by administrators. This also ensures platform stability and consistency, and simplifies installation and scaling. For more information, see OpenShift Container Platform architecture . Immutable infrastructure OpenShift Container Platform 4 uses Red Hat Enterprise Linux CoreOS (RHCOS), which is designed to run containerized applications, and provides efficient installation, Operator-based management, and simplified upgrades. RHCOS is an immutable container host, rather than a customizable operating system like RHEL. RHCOS enables OpenShift Container Platform 4 to manage and automate the deployment of the underlying container host. RHCOS is a part of OpenShift Container Platform, which means that everything runs inside a container and is deployed using OpenShift Container Platform. In OpenShift Container Platform 4, control plane nodes must run RHCOS, ensuring that full-stack automation is maintained for the control plane. This makes rolling out updates and upgrades a much easier process than in OpenShift Container Platform 3. For more information, see Red Hat Enterprise Linux CoreOS (RHCOS) . Operators Operators are a method of packaging, deploying, and managing a Kubernetes application. Operators ease the operational complexity of running another piece of software. They watch over your environment and use the current state to make decisions in real time. Advanced Operators are designed to upgrade and react to failures automatically. For more information, see Understanding Operators . 3.2. Installation and upgrade Installation process To install OpenShift Container Platform 3.11, you prepared your Red Hat Enterprise Linux (RHEL) hosts, set all of the configuration values your cluster needed, and then ran an Ansible playbook to install and set up your cluster. In OpenShift Container Platform 4.9, you use the OpenShift installation program to create a minimum set of resources required for a cluster. 
After the cluster is running, you use Operators to further configure your cluster and to install new services. After first boot, Red Hat Enterprise Linux CoreOS (RHCOS) systems are managed by the Machine Config Operator (MCO) that runs in the OpenShift Container Platform cluster. For more information, see Installation process . If you want to add Red Hat Enterprise Linux (RHEL) worker machines to your OpenShift Container Platform 4.9 cluster, you use an Ansible playbook to join the RHEL worker machines after the cluster is running. For more information, see Adding RHEL compute machines to an OpenShift Container Platform cluster . Infrastructure options In OpenShift Container Platform 3.11, you installed your cluster on infrastructure that you prepared and maintained. In addition to providing your own infrastructure, OpenShift Container Platform 4 offers an option to deploy a cluster on infrastructure that the OpenShift Container Platform installation program provisions and the cluster maintains. For more information, see OpenShift Container Platform installation overview . Upgrading your cluster In OpenShift Container Platform 3.11, you upgraded your cluster by running Ansible playbooks. In OpenShift Container Platform 4.9, the cluster manages its own updates, including updates to Red Hat Enterprise Linux CoreOS (RHCOS) on cluster nodes. You can easily upgrade your cluster by using the web console or by using the oc adm upgrade command from the OpenShift CLI and the Operators will automatically upgrade themselves. If your OpenShift Container Platform 4.9 cluster has RHEL worker machines, then you will still need to run an Ansible playbook to upgrade those worker machines. For more information, see Updating clusters . 3.3. Migration considerations Review the changes and other considerations that might affect your transition from OpenShift Container Platform 3.11 to OpenShift Container Platform 4. 3.3.1. Storage considerations Review the following storage changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.9. Local volume persistent storage Local storage is only supported by using the Local Storage Operator in OpenShift Container Platform 4.9. It is not supported to use the local provisioner method from OpenShift Container Platform 3.11. For more information, see Persistent storage using local volumes . FlexVolume persistent storage The FlexVolume plugin location changed from OpenShift Container Platform 3.11. The new location in OpenShift Container Platform 4.9 is /etc/kubernetes/kubelet-plugins/volume/exec . Attachable FlexVolume plugins are no longer supported. For more information, see Persistent storage using FlexVolume . Container Storage Interface (CSI) persistent storage Persistent storage using the Container Storage Interface (CSI) was Technology Preview in OpenShift Container Platform 3.11. OpenShift Container Platform 4.9 ships with several CSI drivers . You can also install your own driver. For more information, see Persistent storage using the Container Storage Interface (CSI) . Red Hat OpenShift Container Storage Red Hat OpenShift Container Storage 3, which is available for use with OpenShift Container Platform 3.11, uses Red Hat Gluster Storage as the backing storage. Red Hat OpenShift Container Storage 4, which is available for use with OpenShift Container Platform 4, uses Red Hat Ceph Storage as the backing storage. 
For more information, see Persistent storage using Red Hat OpenShift Container Storage and the interoperability matrix article. Unsupported persistent storage options Support for the following persistent storage options from OpenShift Container Platform 3.11 has changed in OpenShift Container Platform 4.9: GlusterFS is no longer supported. CephFS as a standalone product is no longer supported. Ceph RBD as a standalone product is no longer supported. If you used one of these in OpenShift Container Platform 3.11, you must choose a different persistent storage option for full support in OpenShift Container Platform 4.9. For more information, see Understanding persistent storage . 3.3.2. Networking considerations Review the following networking changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.9. Network isolation mode The default network isolation mode for OpenShift Container Platform 3.11 was ovs-subnet , though users frequently switched to use ovn-multitenant . The default network isolation mode for OpenShift Container Platform 4.9 is controlled by a network policy. If your OpenShift Container Platform 3.11 cluster used the ovs-subnet or ovs-multitenant mode, it is recommended to switch to a network policy for your OpenShift Container Platform 4.9 cluster. Network policies are supported upstream, are more flexible, and they provide the functionality that ovs-multitenant does. If you want to maintain the ovs-multitenant behavior while using a network policy in OpenShift Container Platform 4.9, follow the steps to configure multitenant isolation using network policy . For more information, see About network policy . 3.3.3. Logging considerations Review the following logging changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.9. Deploying OpenShift Logging OpenShift Container Platform 4 provides a simple deployment mechanism for OpenShift Logging, by using a Cluster Logging custom resource. For more information, see Installing OpenShift Logging . Aggregated logging data You cannot transition your aggregate logging data from OpenShift Container Platform 3.11 into your new OpenShift Container Platform 4 cluster. For more information, see About OpenShift Logging . Unsupported logging configurations Some logging configurations that were available in OpenShift Container Platform 3.11 are no longer supported in OpenShift Container Platform 4.9. For more information on the explicitly unsupported logging cases, see Maintenance and support . 3.3.4. Security considerations Review the following security changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.9. Unauthenticated access to discovery endpoints In OpenShift Container Platform 3.11, an unauthenticated user could access the discovery endpoints (for example, /api/* and /apis/* ). For security reasons, unauthenticated access to the discovery endpoints is no longer allowed in OpenShift Container Platform 4.9. If you do need to allow unauthenticated access, you can configure the RBAC settings as necessary; however, be sure to consider the security implications as this can expose internal cluster components to the external network. 
Identity providers Configuration for identity providers has changed for OpenShift Container Platform 4, including the following notable changes: The request header identity provider in OpenShift Container Platform 4.9 requires mutual TLS, where in OpenShift Container Platform 3.11 it did not. The configuration of the OpenID Connect identity provider was simplified in OpenShift Container Platform 4.9. It now obtains data, which previously had to specified in OpenShift Container Platform 3.11, from the provider's /.well-known/openid-configuration endpoint. For more information, see Understanding identity provider configuration . OAuth token storage format Newly created OAuth HTTP bearer tokens no longer match the names of their OAuth access token objects. The object names are now a hash of the bearer token and are no longer sensitive. This reduces the risk of leaking sensitive information. 3.3.5. Monitoring considerations Review the following monitoring changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.9. Alert for monitoring infrastructure availability The default alert that triggers to ensure the availability of the monitoring structure was called DeadMansSwitch in OpenShift Container Platform 3.11. This was renamed to Watchdog in OpenShift Container Platform 4. If you had PagerDuty integration set up with this alert in OpenShift Container Platform 3.11, you must set up the PagerDuty integration for the Watchdog alert in OpenShift Container Platform 4. For more information, see Applying custom Alertmanager configuration .
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/migrating_from_version_3_to_4/planning-migration-3-4
Chapter 2. Mirroring container images for disconnected installations
Chapter 2. Mirroring container images for disconnected installations You can use a custom container registry when you deploy MicroShift in a disconnected network. Running your cluster in a restricted network without direct internet connectivity is possible by installing the cluster from a mirrored set of container images in a private registry. 2.1. Mirror container images into an existing registry Using a custom air-gapped container registry, or mirror, is necessary with certain user environments and workload requirements. Mirroring allows for the transfer of container images and updates to air-gapped environments where they can be installed on a MicroShift instance. To create an air-gapped mirror registry for MicroShift containers, you must complete the following steps: Get the container image list to be mirrored. Configure the mirroring prerequisites. Download images on a host with internet access. Copy the downloaded image directory to an air-gapped site. Upload images to a mirror registry in an air-gapped site. Configure your MicroShift hosts to use the mirror registry. Additional resources Creating a mirror registry with mirror registry for Red Hat OpenShift 2.2. Getting the mirror registry container image list To use a mirror registry, you must know which container image references are used by a specific version of MicroShift. These references are provided in the release-<arch>.json files that are part of the microshift-release-info RPM package. Note To mirror the Operator Lifecycle Manager (OLM) in disconnected environments, add the references provided in the release-olm-USDARCH.json that is included in the microshift-olm RPM and follow the same procedure. Use oc-mirror for mirroring Operator catalogs and Operators. Prerequisites You have installed jq. Procedure Access the list of container image references by using one of the following methods: If the package is installed on the MicroShift host, get the location of the files by running the following command: USD rpm -ql microshift-release-info Example output /usr/share/microshift/release/release-x86_64.json If the package is not installed on a MicroShift host, download and unpack the RPM package without installing it by running the following command: USD rpm2cpio microshift-release-info*.noarch.rpm | cpio -idmv Example output /usr/share/microshift/release/release-x86_64.json Extract the list of container images into the microshift-container-refs.txt file by running the following commands: USD RELEASE_FILE=/usr/share/microshift/release/release-USD(uname -m).json USD jq -r '.images | .[]' USD{RELEASE_FILE} > microshift-container-refs.txt Note After the microshift-container-refs.txt file is created with the MicroShift container image list, you can append the file with other user-specific image references before running the mirroring procedure. 2.3. Configuring mirroring prerequisites You must create a container image registry credentials file that allows the mirroring of images from your internet-connected mirror host to your air-gapped mirror. Follow the instructions in the "Configuring credentials that allow images to be mirrored" link provided in the "Additional resources" section. These instructions guide you to create a ~/.pull-secret-mirror.json file on the mirror registry host that includes the user credentials for accessing the mirror. 2.3.1. 
Example mirror registry pull secret entry For example, the following section is added to the pull secret file for the microshift_quay:8443 mirror registry using microshift:microshift as username and password. Example mirror registry section for pull secret file "<microshift_quay:8443>": { 1 "auth": "<microshift_auth>", 2 "email": "<[email protected]>" 3 }, 1 Replace the <registry_host>:<port> value microshift_quay:8443 with the host name and port of your mirror registry server. 2 Replace the <microshift_auth> value with the user password. 3 Replace the </[email protected]> value with the user email. Additional resources Configuring credentials that allow images to be mirrored 2.4. Downloading container images After you have located the container list and completed the mirroring prerequisites, download the container images to a host with internet access. Prerequisites You are logged into a host with access to the internet. The .pull-secret-mirror.json file and microshift-containers directory contents are available locally. Procedure Install the skopeo tool used for copying the container images by running the following command: USD sudo dnf install -y skopeo Set the environment variable that points to the pull secret file: USD PULL_SECRET_FILE=~/.pull-secret-mirror.json Set the environment variable that points to the list of container images: USD IMAGE_LIST_FILE=~/microshift-container-refs.txt Set the environment variable that points to the destination directory for storing the downloaded data: USD IMAGE_LOCAL_DIR=~/microshift-containers Run the following script to download the container images to the USD{IMAGE_LOCAL_DIR} directory: while read -r src_img ; do # Remove the source registry prefix dst_img=USD(echo "USD{src_img}" | cut -d '/' -f 2-) # Run the image download command echo "Downloading 'USD{src_img}' to 'USD{IMAGE_LOCAL_DIR}'" mkdir -p "USD{IMAGE_LOCAL_DIR}/USD{dst_img}" skopeo copy --all --quiet \ --preserve-digests \ --authfile "USD{PULL_SECRET_FILE}" \ docker://"USD{src_img}" dir://"USD{IMAGE_LOCAL_DIR}/USD{dst_img}" done < "USD{IMAGE_LIST_FILE}" 2.5. Uploading container images to a mirror registry To use your container images at an air-gapped site, upload them to the mirror registry using the following procedure. Prerequisites You are logged into a host with access to microshift-quay . The .pull-secret-mirror.json file is available locally. The microshift-containers directory contents are available locally. Procedure Install the skopeo tool used for copying the container images by running the following command: USD sudo dnf install -y skopeo Set the environment variables pointing to the pull secret file: USD IMAGE_PULL_FILE=~/.pull-secret-mirror.json Set the environment variables pointing to the local container image directory: USD IMAGE_LOCAL_DIR=~/microshift-containers Set the environment variables pointing to the mirror registry URL for uploading the container images: USD TARGET_REGISTRY= <registry_host>:<port> 1 1 Replace <registry_host>:<port> with the host name and port of your mirror registry server. 
Run the following script to upload the container images to the USD{TARGET_REGISTRY} mirror registry: pushd "USD{IMAGE_LOCAL_DIR}" >/dev/null while read -r src_manifest ; do src_img=USD(dirname "USD{src_manifest}") # Add the target registry prefix and remove SHA dst_img="USD{TARGET_REGISTRY}/USD{src_img}" dst_img_no_tag="USD{TARGET_REGISTRY}/USD{src_img%%[@:]*}" # Run the image upload echo "Uploading 'USD{src_img}' to 'USD{dst_img}'" skopeo copy --all --quiet \ --preserve-digests \ --authfile "USD{IMAGE_PULL_FILE}" \ dir://"USD{IMAGE_LOCAL_DIR}/USD{src_img}" docker://"USD{dst_img}" done < <(find . -type f -name manifest.json -printf '%P\n') popd >/dev/null 2.6. Configuring hosts for mirror registry access To configure a MicroShift host to use a mirror registry, you must give the MicroShift host access to the registry by creating a configuration file that maps the Red Hat registry host names to the mirror. Prerequisites Your mirror host has access to the internet. The mirror host can access the mirror registry. You configured the mirror registry for use in your restricted network. You downloaded the pull secret and modified it to include authentication to your mirror repository. Procedure Log into your MicroShift host. Enable the SSL certificate trust on any host accessing the mirror registry by completing the following steps: Copy the rootCA.pem file from the mirror registry, for example, <registry_path>/quay-rootCA , to the MicroShift host at the /etc/pki/ca-trust/source/anchors directory. Enable the certificate in the system-wide trust store configuration by running the following command: USD sudo update-ca-trust Create the /etc/containers/registries.conf.d/999-microshift-mirror.conf configuration file that maps the Red Hat registry host names to the mirror registry: Example mirror configuration file [[registry]] prefix = "" location = "<registry_host>:<port>" 1 mirror-by-digest-only = true insecure = false [[registry]] prefix = "" location = "quay.io" mirror-by-digest-only = true [[registry.mirror]] location = "<registry_host>:<port>" insecure = false [[registry]] prefix = "" location = "registry.redhat.io" mirror-by-digest-only = true [[registry.mirror]] location = "<registry_host>:<port>" insecure = false [[registry]] prefix = "" location = "registry.access.redhat.com" mirror-by-digest-only = true [[registry.mirror]] location = "<registry_host>:<port>" insecure = false 1 Replace <registry_host>:<port> with the host name and port of your mirror registry server, for example, <microshift-quay:8443> . Enable the MicroShift service by running the following command: USD sudo systemctl enable microshift Reboot the host by running the following command: USD sudo reboot
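After the host reboots, you can optionally confirm that image pulls are served by the mirror. The following is a minimal verification sketch, assuming the microshift-container-refs.txt file created earlier has been copied to the MicroShift host; the exact image pulled depends on your own list.
# Take one digest-pinned reference from the mirrored image list.
IMAGE_REF="$(head -n 1 microshift-container-refs.txt)"
# Because the registries.conf drop-in sets mirror-by-digest-only = true,
# a digest-pinned pull is redirected to the mirror registry.
sudo podman pull "${IMAGE_REF}"
# Confirm that the MicroShift service starts with the mirrored images.
sudo systemctl status microshift --no-pager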
[ "rpm -ql microshift-release-info", "/usr/share/microshift/release/release-x86_64.json", "rpm2cpio microshift-release-info*.noarch.rpm | cpio -idmv", "/usr/share/microshift/release/release-x86_64.json", "RELEASE_FILE=/usr/share/microshift/release/release-USD(uname -m).json", "jq -r '.images | .[]' USD{RELEASE_FILE} > microshift-container-refs.txt", "\"<microshift_quay:8443>\": { 1 \"auth\": \"<microshift_auth>\", 2 \"email\": \"<[email protected]>\" 3 },", "sudo dnf install -y skopeo", "PULL_SECRET_FILE=~/.pull-secret-mirror.json", "IMAGE_LIST_FILE=~/microshift-container-refs.txt", "IMAGE_LOCAL_DIR=~/microshift-containers", "while read -r src_img ; do # Remove the source registry prefix dst_img=USD(echo \"USD{src_img}\" | cut -d '/' -f 2-) # Run the image download command echo \"Downloading 'USD{src_img}' to 'USD{IMAGE_LOCAL_DIR}'\" mkdir -p \"USD{IMAGE_LOCAL_DIR}/USD{dst_img}\" skopeo copy --all --quiet --preserve-digests --authfile \"USD{PULL_SECRET_FILE}\" docker://\"USD{src_img}\" dir://\"USD{IMAGE_LOCAL_DIR}/USD{dst_img}\" done < \"USD{IMAGE_LIST_FILE}\"", "sudo dnf install -y skopeo", "IMAGE_PULL_FILE=~/.pull-secret-mirror.json", "IMAGE_LOCAL_DIR=~/microshift-containers", "TARGET_REGISTRY= <registry_host>:<port> 1", "pushd \"USD{IMAGE_LOCAL_DIR}\" >/dev/null while read -r src_manifest ; do local src_img src_img=USD(dirname \"USD{src_manifest}\") # Add the target registry prefix and remove SHA local -r dst_img=\"USD{TARGET_REGISTRY}/USD{src_img}\" local -r dst_img_no_tag=\"USD{TARGET_REGISTRY}/USD{src_img%%[@:]*}\" # Run the image upload echo \"Uploading 'USD{src_img}' to 'USD{dst_img}'\" skopeo copy --all --quiet --preserve-digests --authfile \"USD{IMAGE_PULL_FILE}\" dir://\"USD{IMAGE_LOCAL_DIR}/USD{src_img}\" docker://\"USD{dst_img}\" done < <(find . -type f -name manifest.json -printf '%P\\n') popd >/dev/null", "sudo update-ca-trust", "[[registry]] prefix = \"\" location = \"<registry_host>:<port>\" 1 mirror-by-digest-only = true insecure = false [[registry]] prefix = \"\" location = \"quay.io\" mirror-by-digest-only = true [[registry.mirror]] location = \"<registry_host>:<port>\" insecure = false [[registry]] prefix = \"\" location = \"registry.redhat.io\" mirror-by-digest-only = true [[registry.mirror]] location = \"<registry_host>:<port>\" insecure = false [[registry]] prefix = \"\" location = \"registry.access.redhat.com\" mirror-by-digest-only = true [[registry.mirror]] location = \"<registry_host>:<port>\" insecure = false", "sudo systemctl enable microshift", "sudo reboot" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/embedding_in_a_rhel_for_edge_image/microshift-deploy-with-mirror-registry
Chapter 8. Networking
Chapter 8. Networking Both iptables and ip6tables services now recognize the security table in the set_policy() function Previously, when the security table was used, the iptables or ip6tables services failed to clear the firewall ruleset correctly during shutdown. As a consequence, an error message was displayed when stopping these services. With this update, both iptables and ip6tables init scripts recognize but ignore the security table when clearing the firewall ruleset. As a result, the error message is no longer displayed in the described scenario. (BZ#1210563) Unusual skbs no longer cause the kernel to crash Under a rare network condition, the TCP stack created and tried to transmit unusual socket buffers (skbs). Previously, certain core kernel functions did not support such unusual skbs. As a consequence, the BUG() kernel message was displayed, and the kernel terminated unexpectedly. With this update, the relevant function is extended to support such skbs, and the kernel no longer crashes. (BZ#1274139) The dmesg log no longer displays 'hw csum failure' with inbound IPv6 traffic Previously, when IPv6 fragments were received, the cxgb4 Network Interface Card (NIC) calculated an incorrect checksum. As a consequence, the kernel reported the 'hw csum failure' error message in the dmesg system log when receiving a fragmented IPv6 packet. With this update, the hardware checksum calculation happens only when IPv4 fragments are received. If IPv6 fragments are received, the checksum calculation happens in software. As a result, when IPv6 fragments are received, dmesg no longer displays the error message in the described scenario. (BZ#1427036) SCTP now selects the right source address Previously, when using a secondary IPv6 address, Stream Control Transmission Protocol (SCTP) selected the source address based on the best prefix match with the destination address. As a consequence, in some cases, a packet was sent through an interface with the wrong IPv6 address. With this update, SCTP uses the address that already exists in the routing table for this specific route. As a result, SCTP uses the expected IPv6 address as the source address when secondary addresses are used on a host. (BZ#1445919) Improved performance of SCTP Previously, small data chunks caused the Stream Control Transmission Protocol (SCTP) to account for the receiver_window (rwnd) values incorrectly when recovering from a zero-window situation. As a consequence, window updates were not sent to the peer, and an artificial growth of rwnd could lead to packet drops. This update properly accounts for such small data chunks and ignores the rwnd pressure values when reopening a window. As a result, window updates are now sent, and the announced rwnd better reflects the real state of the receive buffer. (BZ#1492220) The virtio interface now transmits Ethernet packets correctly Previously, when a virtio Network Interface Card (NIC) received a short frame from the guest, the virtio interface stopped transmitting any Ethernet packets. As a consequence, packets transmitted by the guest never appeared on the hypervisor virtual network (vnet) device. With this update, the kernel drops truncated packets, and the virtio interface transmits the packets correctly. (BZ#1535024)
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_technical_notes/bug_fixes_networking
Chapter 7. AWS Kinesis
Chapter 7. AWS Kinesis Both producer and consumer are supported The AWS2 Kinesis component supports receiving messages from and sending messages to the Amazon Kinesis service (no Batch supported). The AWS2 Kinesis component also supports a synchronous and an asynchronous client. If you need the connection (client) to be asynchronous, set the 'asyncClient' option (also available in the DSL) to true . Prerequisites You must have a valid Amazon Web Services developer account, and be signed up to use Amazon Kinesis. More information is available at AWS Kinesis . 7.1. Dependencies When using aws2-kinesis with Red Hat build of Camel Spring Boot, add the following Maven dependency to your pom.xml to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-kinesis-starter</artifactId> </dependency> 7.2. URI Format The stream needs to be created before it is used. You can append query options to the URI in the following format, ?option=value&option2=value&... 7.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 7.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 7.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 7.4. Component Options The AWS Kinesis component supports 28 options, which are listed below. Name Description Default Type amazonKinesisClient (common) Autowired Amazon Kinesis client to use for all requests for this endpoint. KinesisClient cborEnabled (common) This option will set the CBOR_ENABLED property during the execution. true boolean configuration (common) Component configuration. Kinesis2Configuration overrideEndpoint (common) Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false boolean proxyHost (common) To define a proxy host when instantiating the Kinesis client. String proxyPort (common) To define a proxy port when instantiating the Kinesis client. Integer proxyProtocol (common) To define a proxy protocol when instantiating the Kinesis client. Enum values: HTTP HTTPS HTTPS Protocol region (common) The region in which Kinesis Firehose client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1). You'll need to use the name Region.EU_WEST_1.id().
String trustAllCertificates (common) If we want to trust all certificates in case of overriding the endpoint. false boolean uriEndpointOverride (common) Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String useDefaultCredentialsProvider (common) Set whether the Kinesis client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean iteratorType (consumer) Defines where in the Kinesis stream to start getting records. Enum values: AT_SEQUENCE_NUMBER AFTER_SEQUENCE_NUMBER TRIM_HORIZON LATEST AT_TIMESTAMP null TRIM_HORIZON ShardIteratorType maxResultsPerRequest (consumer) Maximum number of records that will be fetched in each poll. 1 int resumeStrategy (consumer) Defines a resume strategy for AWS Kinesis. The default strategy reads the sequenceNumber if provided. KinesisUserConfigurationResumeStrategy KinesisResumeStrategy sequenceNumber (consumer) The sequence number to start polling from. Required if iteratorType is set to AFTER_SEQUENCE_NUMBER or AT_SEQUENCE_NUMBER. String shardClosed (consumer) Define what will be the behavior in case of shard closed. Possible value are ignore, silent and fail. In case of ignore a message will be logged and the consumer will restart from the beginning,in case of silent there will be no logging and the consumer will start from the beginning,in case of fail a ReachedClosedStateException will be raised. Enum values: ignore fail silent ignore Kinesis2ShardClosedStrategyEnum shardId (consumer) Defines which shardId in the Kinesis stream to get records from. String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean asyncClient (advanced) If we want to a KinesisAsyncClient instance set it to true. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean healthCheckConsumerEnabled (health) Used for enabling or disabling all consumer based health checks from this component. true boolean healthCheckProducerEnabled (health) Used for enabling or disabling all producer based health checks from this component. NOTE: Camel has by default disabled all producer based health-checks. 
You can turn on producer checks globally by setting camel.health.producersEnabled=true . true boolean accessKey (security) Amazon AWS Access Key. String profileCredentialsName (security) If using a profile credentials provider this parameter will set the profile name. String secretKey (security) Amazon AWS Secret Key. String sessionToken (security) Amazon AWS Session Token used when the user needs to assume a IAM role. String trustAllCertificates (security) If we want to trust all certificates in case of overriding the endpoint. false boolean useDefaultCredentialsProvider (security) Set whether the Kinesis client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. false boolean useProfileCredentialsProvider (security) Set whether the Kinesis client should expect to load credentials through a profile credentials provider. false boolean useSessionCredentials (security) Set whether the Kinesis client should expect to use Session Credentials. This is useful in situation in which the user needs to assume a IAM role for doing operations in Kinesis. false boolean 7.5. Endpoint Options The AWS Kinesis endpoint is configured using URI syntax: with the following path and query parameters: 7.5.1. Path Parameters (1 parameters) Name Description Default Type streamName (common) Required Name of the stream. String 7.5.2. Query Parameters (42 parameters) Name Description Default Type amazonKinesisClient (common) Autowired Amazon Kinesis client to use for all requests for this endpoint. KinesisClient cborEnabled (common) This option will set the CBOR_ENABLED property during the execution. true boolean overrideEndpoint (common) Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false boolean proxyHost (common) To define a proxy host when instantiating the Kinesis client. String proxyPort (common) To define a proxy port when instantiating the Kinesis client. Integer proxyProtocol (common) To define a proxy protocol when instantiating the Kinesis client. Enum values: HTTP HTTPS HTTPS Protocol region (common) The region in which Kinesis Firehose client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). String trustAllCertificates (common) If we want to trust all certificates in case of overriding the endpoint. false boolean uriEndpointOverride (common) Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String useDefaultCredentialsProvider (common) Set whether the Kinesis client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean iteratorType (consumer) Defines where in the Kinesis stream to start getting records. 
Enum values: AT_SEQUENCE_NUMBER AFTER_SEQUENCE_NUMBER TRIM_HORIZON LATEST AT_TIMESTAMP null TRIM_HORIZON ShardIteratorType maxResultsPerRequest (consumer) Maximum number of records that will be fetched in each poll. 1 int resumeStrategy (consumer) Defines a resume strategy for AWS Kinesis. The default strategy reads the sequenceNumber if provided. KinesisUserConfigurationResumeStrategy KinesisResumeStrategy sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean sequenceNumber (consumer) The sequence number to start polling from. Required if iteratorType is set to AFTER_SEQUENCE_NUMBER or AT_SEQUENCE_NUMBER. String shardClosed (consumer) Define what will be the behavior in case of shard closed. Possible value are ignore, silent and fail. In case of ignore a message will be logged and the consumer will restart from the beginning,in case of silent there will be no logging and the consumer will start from the beginning,in case of fail a ReachedClosedStateException will be raised. Enum values: ignore fail silent ignore Kinesis2ShardClosedStrategyEnum shardId (consumer) Defines which shardId in the Kinesis stream to get records from. String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean asyncClient (advanced) If we want to a KinesisAsyncClient instance set it to true. false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 
500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long repeatCount (scheduler) Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE DEBUG INFO WARN ERROR OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. Enum values: NANOSECONDS MICROSECONDS MILLISECONDS SECONDS MINUTES HOURS DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean accessKey (security) Amazon AWS Access Key. String profileCredentialsName (security) If using a profile credentials provider this parameter will set the profile name. String secretKey (security) Amazon AWS Secret Key. String sessionToken (security) Amazon AWS Session Token used when the user needs to assume a IAM role. String trustAllCertificates (security) If we want to trust all certificates in case of overriding the endpoint. false boolean useDefaultCredentialsProvider (security) Set whether the Kinesis client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. false boolean useProfileCredentialsProvider (security) Set whether the Kinesis client should expect to load credentials through a profile credentials provider. false boolean useSessionCredentials (security) Set whether the Kinesis client should expect to use Session Credentials. This is useful in situation in which the user needs to assume a IAM role for doing operations in Kinesis. false boolean Required Kinesis component options You have to provide the KinesisClient in the Registry with proxies and relevant credentials configured. 7.6. Batch Consumer This component implements the Batch Consumer. This allows you for instance to know how many messages exists in this batch and for instance let the Aggregator aggregate this number of messages. The consumer is able to consume either from a single specific shard or all available shards (multiple shards consumption) of Amazon Kinesis. Therefore, if you leave the 'shardId' property in the DSL configuration empty, then it'll consume all available shards otherwise only the specified shard corresponding to the shardId will be consumed. 7.7. Usage 7.7.1. Static credentials vs Default Credential Provider You have the possibility of avoiding the usage of explicit static credentials, by specifying the useDefaultCredentialsProvider option and set it to true. 
The order of evaluation for Default Credentials Provider is the following: Java system properties - aws.accessKeyId and aws.secretKey Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY . Web Identity Token from AWS STS. The shared credentials and config files. Amazon ECS container credentials - loaded from the Amazon ECS if the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set. Amazon EC2 Instance profile credentials. You also have the possibility of using the Profile Credentials Provider, by setting the useProfileCredentialsProvider option to true and profileCredentialsName to the profile name. Only one of static, default, and profile credentials can be used at the same time. For more information, see the AWS credentials documentation . 7.8. Message headers Name Description CamelAwsKinesisSequenceNumber (common) Constant: SEQUENCE_NUMBER The sequence number of the record as defined in PutRecord syntax . CamelAwsKinesisApproximateArrivalTimestamp (common) Constant: APPROX_ARRIVAL_TIME The time AWS assigned as the arrival time of the record. CamelAwsKinesisPartitionKey (common) Constant: PARTITION_KEY Identifies which shard in the stream the data record is assigned to. CamelMessageTimestamp (common) Constant: MESSAGE_TIMESTAMP The timestamp of the message. CamelAwsKinesisShardId (common) Constant: SHARD_ID The shard ID of the shard where the data record was placed. 7.8.1. AmazonKinesis configuration You then have to reference the KinesisClient in the amazonKinesisClient URI option. from("aws2-kinesis://mykinesisstream?amazonKinesisClient=#kinesisClient") .to("log:out?showAll=true"); 7.8.2. Providing AWS Credentials It is recommended that the credentials are obtained by using the DefaultAWSCredentialsProviderChain, which is the default when creating a new ClientConfiguration instance. However, a different AWSCredentialsProvider can be specified when calling createClient(... ). 7.8.3. AWS Kinesis KCL Consumer The component also supports the KCL (Kinesis Client Library) for consuming from a Kinesis Data Stream. To enable this feature, set the following parameters in your endpoint: from("aws2-kinesis://mykinesisstream?asyncClient=true&useDefaultCredentialsProvider=true&useKclConsumers=true") .to("log:out?showAll=true"); This feature makes it possible to automatically checkpoint the shard iterations by combining the usage of KCL, a DynamoDB table, and CloudWatch alarms. This works out of the box by simply using your AWS credentials. Note The AWS Kinesis consumer with KCL needs approximately 60-70 seconds to start up. 7.9. Spring Boot Auto-Configuration The component supports 50 options, which are listed below. Name Description Default Type camel.component.aws2-kinesis-firehose.access-key Amazon AWS Access Key. String camel.component.aws2-kinesis-firehose.amazon-kinesis-firehose-client Amazon Kinesis Firehose client to use for all requests for this endpoint. The option is a software.amazon.awssdk.services.firehose.FirehoseClient type. FirehoseClient camel.component.aws2-kinesis-firehose.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.
true Boolean camel.component.aws2-kinesis-firehose.cbor-enabled This option will set the CBOR_ENABLED property during the execution. true Boolean camel.component.aws2-kinesis-firehose.configuration Component configuration. The option is a org.apache.camel.component.aws2.firehose.KinesisFirehose2Configuration type. KinesisFirehose2Configuration camel.component.aws2-kinesis-firehose.enabled Whether to enable auto configuration of the aws2-kinesis-firehose component. This is enabled by default. Boolean camel.component.aws2-kinesis-firehose.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.aws2-kinesis-firehose.operation The operation to do in case the user don't want to send only a record. KinesisFirehose2Operations camel.component.aws2-kinesis-firehose.override-endpoint Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false Boolean camel.component.aws2-kinesis-firehose.profile-credentials-name If using a profile credentials provider this parameter will set the profile name. String camel.component.aws2-kinesis-firehose.proxy-host To define a proxy host when instantiating the Kinesis Firehose client. String camel.component.aws2-kinesis-firehose.proxy-port To define a proxy port when instantiating the Kinesis Firehose client. Integer camel.component.aws2-kinesis-firehose.proxy-protocol To define a proxy protocol when instantiating the Kinesis Firehose client. Protocol camel.component.aws2-kinesis-firehose.region The region in which Kinesis Firehose client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). String camel.component.aws2-kinesis-firehose.secret-key Amazon AWS Secret Key. String camel.component.aws2-kinesis-firehose.session-token Amazon AWS Session Token used when the user needs to assume a IAM role. String camel.component.aws2-kinesis-firehose.trust-all-certificates If we want to trust all certificates in case of overriding the endpoint. false Boolean camel.component.aws2-kinesis-firehose.uri-endpoint-override Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String camel.component.aws2-kinesis-firehose.use-default-credentials-provider Set whether the Kinesis Firehose client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. false Boolean camel.component.aws2-kinesis-firehose.use-profile-credentials-provider Set whether the Kinesis Firehose client should expect to load credentials through a profile credentials provider. false Boolean camel.component.aws2-kinesis-firehose.use-session-credentials Set whether the Kinesis Firehose client should expect to use Session Credentials. This is useful in situation in which the user needs to assume a IAM role for doing operations in Kinesis Firehose. 
false Boolean camel.component.aws2-kinesis.access-key Amazon AWS Access Key. String camel.component.aws2-kinesis.amazon-kinesis-client Amazon Kinesis client to use for all requests for this endpoint. The option is a software.amazon.awssdk.services.kinesis.KinesisClient type. KinesisClient camel.component.aws2-kinesis.async-client If we want to a KinesisAsyncClient instance set it to true. false Boolean camel.component.aws2-kinesis.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.aws2-kinesis.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.aws2-kinesis.cbor-enabled This option will set the CBOR_ENABLED property during the execution. true Boolean camel.component.aws2-kinesis.configuration Component configuration. The option is a org.apache.camel.component.aws2.kinesis.Kinesis2Configuration type. Kinesis2Configuration camel.component.aws2-kinesis.enabled Whether to enable auto configuration of the aws2-kinesis component. This is enabled by default. Boolean camel.component.aws2-kinesis.health-check-consumer-enabled Used for enabling or disabling all consumer based health checks from this component. true Boolean camel.component.aws2-kinesis.health-check-producer-enabled Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true . true Boolean camel.component.aws2-kinesis.iterator-type Defines where in the Kinesis stream to start getting records. ShardIteratorType camel.component.aws2-kinesis.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.aws2-kinesis.max-results-per-request Maximum number of records that will be fetched in each poll. 1 Integer camel.component.aws2-kinesis.override-endpoint Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false Boolean camel.component.aws2-kinesis.profile-credentials-name If using a profile credentials provider this parameter will set the profile name. String camel.component.aws2-kinesis.proxy-host To define a proxy host when instantiating the Kinesis client. 
String camel.component.aws2-kinesis.proxy-port To define a proxy port when instantiating the Kinesis client. Integer camel.component.aws2-kinesis.proxy-protocol To define a proxy protocol when instantiating the Kinesis client. Protocol camel.component.aws2-kinesis.region The region in which Kinesis Firehose client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). String camel.component.aws2-kinesis.resume-strategy Defines a resume strategy for AWS Kinesis. The default strategy reads the sequenceNumber if provided. The option is a org.apache.camel.component.aws2.kinesis.consumer.KinesisResumeStrategy type. KinesisResumeStrategy camel.component.aws2-kinesis.secret-key Amazon AWS Secret Key. String camel.component.aws2-kinesis.sequence-number The sequence number to start polling from. Required if iteratorType is set to AFTER_SEQUENCE_NUMBER or AT_SEQUENCE_NUMBER. String camel.component.aws2-kinesis.session-token Amazon AWS Session Token used when the user needs to assume a IAM role. String camel.component.aws2-kinesis.shard-closed Define what will be the behavior in case of shard closed. Possible value are ignore, silent and fail. In case of ignore a message will be logged and the consumer will restart from the beginning,in case of silent there will be no logging and the consumer will start from the beginning,in case of fail a ReachedClosedStateException will be raised. Kinesis2ShardClosedStrategyEnum camel.component.aws2-kinesis.shard-id Defines which shardId in the Kinesis stream to get records from. String camel.component.aws2-kinesis.trust-all-certificates If we want to trust all certificates in case of overriding the endpoint. false Boolean camel.component.aws2-kinesis.uri-endpoint-override Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String camel.component.aws2-kinesis.use-default-credentials-provider Set whether the Kinesis client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. false Boolean camel.component.aws2-kinesis.use-profile-credentials-provider Set whether the Kinesis client should expect to load credentials through a profile credentials provider. false Boolean camel.component.aws2-kinesis.use-session-credentials Set whether the Kinesis client should expect to use Session Credentials. This is useful in situation in which the user needs to assume a IAM role for doing operations in Kinesis. false Boolean
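To tie the credential options above to the Default Credentials Provider order described in section 7.7.1, the following is a minimal sketch of starting a Camel Spring Boot application with credentials taken from environment variables; the JAR name, key values, and region are placeholders, not values from this chapter.
# Credentials picked up by the default credentials provider chain.
export AWS_ACCESS_KEY_ID="<your-access-key>"
export AWS_SECRET_ACCESS_KEY="<your-secret-key>"
# Spring Boot accepts the auto-configuration properties listed above as
# command-line arguments; the JAR name below is a placeholder.
java -jar my-camel-kinesis-app.jar \
  --camel.component.aws2-kinesis.use-default-credentials-provider=true \
  --camel.component.aws2-kinesis.region=eu-west-1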
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-kinesis-starter</artifactId> </dependency>", "aws2-kinesis://stream-name[?options]", "aws2-kinesis:streamName", "from(\"aws2-kinesis://mykinesisstream?amazonKinesisClient=#kinesisClient\") .to(\"log:out?showAll=true\");", "from(\"aws2-kinesis://mykinesisstream?asyncClient=true&useDefaultCredentialsProvider=true&useKclConsumers=true\") .to(\"log:out?showAll=true\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-aws2-kinesis-component-starter
Chapter 5. New features and enhancements
Chapter 5. New features and enhancements 5.1. Red Hat Enterprise Linux 9.0 for SAP Solutions RHEL System Roles for SAP Ansible Core support for the RHEL System Roles As of the RHEL 9 GA release, Ansible Core is provided, with a limited scope of support, to enable RHEL supported automation use cases. Ansible Core replaces Ansible Engine, which was provided on earlier versions of RHEL in a separate repository. Ansible Core is available in the AppStream repository for RHEL. For more details on the supported use cases, see Scope of support for the Ansible Core package included in the RHEL 9 AppStream . If you require Ansible Engine support, or otherwise need support for non-RHEL automation use cases, create a Case at Red Hat Support. Full Support for role sap_hana_install With the role sap_hana_install , installing SAP HANA standalone or scale-out is simple and reliable and requires no interactive user input, and there is no need to learn how to configure the hdblcm configfile for doing an unattended installation. This role was initially shipped with Technology Preview support and is now fully supported. SELinux file labeling for SAP The roles sap_general_preconfigure and sap_hana_preconfigure now support setting SELinux file labels for running SAP HANA or SAP ABAP application instances on RHEL systems with SELinux in enforcing or permissive mode. SAP HANA Pacemaker System roles have been enhanced to allow the setup of two-node SAP HANA pacemaker clusters. For Red Hat Enterprise Linux 9.0, it is provided as Technology Preview. For information on Red Hat's scope of support for Technology Preview features, see Technology Preview Features Support Scope . HA solutions for SAP SAP HANA Multitarget System Replication SAP HANA Multitarget System Replication is now supported in combination with the HA solution for managing SAP HANA Scale-Up System Replication. See Configuring SAP HANA Scale-Up Multitarget System Replication for disaster recovery for more information. resource-agents-sap-hana The following enhancements have been made in version 0.162.1: A new parameter, HANA_CALL_TIMEOUT , has been added. It fixes the issue of hard-coded timeouts for most HANA_CALL commands. Provision of systemd support. Start and stop resource operation timeouts can now be used for increased WaitforStarted/WaitforStopped timeouts. The minimum timeout remains 3600s. The logging has been improved. The error handling has been improved. 5.2. Red Hat Enterprise Linux 9.3 for SAP Solutions HA solutions for SAP When using the HA solutions for managing HANA Multitarget System Replication, it is also possible to set up a separate inactive cluster for managing the HANA instances at the DR site, which can be activated manually in the event of the primary cluster becoming unavailable. For more details, please refer to Configuring SAP HANA Scale-Up Multitarget System Replication for disaster recovery . RHEL HA solutions for SAP now support managing SAP HANA Multitarget System Replication for both HANA Scale-Up and HANA Scale-Out environments, allowing for automated failover with 3 or more replicas. For more details, please refer to Multitarget System Replication . 5.3.
Red Hat Enterprise Linux 9.4 for SAP Solutions HA solutions for SAP Enabling the SAP HANA srServiceStateChanged() hook for hdbindexserver process failure action Starting with version 0.162.3, the resource-agents-sap-hana package provides a new SAP HANA hook script for dealing with situations where the HANA hdbindexserver process has crashed or is hanging: The ChkSrv.py hook script uses the SAP HANA srServiceStateChanged() hook to process HANA events and allow the HA cluster to react to dying or hanging SAP HANA hdbindexserver processes. The ChkSrv.py hook script provides the option to choose the reaction to a crashed or hanging HANA hdbindexserver process: either stop or kill the HANA DB, or only log events for monitoring purposes. All activity related to the srServiceStateChanged() HANA hook is logged in a dedicated SAP HANA tracefile. The minimum required SAP HANA version to enable this feature is SAP HANA 2.0 SPS4. For more details, refer to Enabling the SAP HANA srServiceStateChanged() hook for hdbindexserver process failure action (optional) . In addition to the new feature, version 0.162.3 (and later) of the resource-agents-sap-hana package also provides the following enhancements: Avoids explicit and implicit usage of the /tmp file system to keep the SAPHanaSR resource agents working even in situations where the /tmp file system is full. If the SAPHanaSR.py hook script successfully reports an srConnectionChanged() event to the cluster, a still existing fallback state file is removed to prevent an override of an already reported SR state. Improves supportability by providing the current process ID of the resource agent, logged in the resource agent output and HANA tracefiles. Improves the logging of status and actions that the resource agents perform. RHEL System Roles for SAP The following enhancements have been made for the roles given below: collection : Ensures Ansible 2.16.1, 2.15.8, 2.14.12 (CVE-2023-5764) compatibility. collection : Minimum Ansible version is now 2.14. preconfigure : Includes SLES related code. Configuring SLES managed nodes is nevertheless unsupported by Red Hat. sap_hana_preconfigure : Implements SAP HANA requirements for RHEL 8.8 and RHEL 9.2 and is less restrictive with RHEL versions that are not yet supported for SAP HANA. sap_ha_pacemaker_cluster : Improves VIP resource and constraint setup per platform. For more details, refer to Red Hat Enterprise Linux System Roles for SAP . Security You can now learn about processes and practices for securing Red Hat Enterprise Linux systems against local and remote intrusion, exploitation, and malicious activity. These approaches and tools can create a more secure environment for running SAP HANA. For more details, refer to: Security hardening guide for SAP HANA Configuring fapolicyd to allow only SAP HANA executables Using SELinux for SAP HANA 5.4. Red Hat Enterprise Linux 9.5 for SAP Solutions HA solutions for SAP Adding SAP HANA indexserver crash restart handling, detecting SAP HANA indexserver failure in scale-out (in addition to scale-up) DBMS clusters, and automated switching over to the secondary SAP HANA node is now possible. For more information, refer to Additional hooks . Adding support for RHEL High Availability on Azure Government Cloud to enable RHEL HA to run SAP and other workloads is now possible. This allows you to use RHEL HA in an environment that meets the compliance and security standards mandated by the US government for sensitive data.
For more information, refer to Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members . RHEL System Roles for SAP The following enhancement has been made for the role given below: sap_netweaver_preconfigure : Syncs with SAP note 3119751 v.13 for RHEL.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/9.x_release_notes/new_features_9.x_release_notes
Chapter 2. Installing and configuring Ceph for OpenStack
Chapter 2. Installing and configuring Ceph for OpenStack As a storage administrator, you must install and configure Ceph before the Red Hat OpenStack Platform can use the Ceph block devices. 2.1. Prerequisites A new or existing Red Hat Ceph Storage cluster. 2.2. Creating Ceph pools for OpenStack Creating Ceph pools for use with OpenStack. By default, Ceph block devices use the rbd pool, but you can use any available pool. Prerequisites A running Red Hat Ceph Storage cluster. Procedure Verify the Red Hat Ceph Storage cluster is running, and is in a HEALTH_OK state: Create the Ceph pools: Example In the above example, 128 is the number of placement groups. Important Red Hat recommends using the Ceph Placement Group's per Pool Calculator to calculate a suitable number of placement groups for the pools. Additional Resources See the Pools chapter in the Storage Strategies guide for more details on creating pools. 2.3. Installing the Ceph client on OpenStack Install the Ceph client packages on the Red Hat OpenStack Platform to access the Ceph storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Access to the Ceph software repository. Root-level access to the OpenStack Nova, Cinder, Cinder Backup and Glance nodes. Procedure On the OpenStack Nova, Cinder, and Cinder Backup nodes, install the following packages: On the OpenStack Glance node, install the python-rbd package: 2.4. Copying the Ceph configuration file to OpenStack Copying the Ceph configuration file to the nova-compute , cinder-backup , cinder-volume , and glance-api nodes. Prerequisites A running Red Hat Ceph Storage cluster. Access to the Ceph software repository. Root-level access to the OpenStack Nova, Cinder, and Glance nodes. Procedure Copy the Ceph configuration file from the Ceph Monitor node to the OpenStack Nova, Cinder, Cinder Backup and Glance nodes: 2.5. Configuring Ceph client authentication Configure authentication for the Ceph client to access the Red Hat OpenStack Platform. Prerequisites Root-level access to the Ceph Monitor node. A running Red Hat Ceph Storage cluster. Procedure From a Ceph Monitor node, create new users for Cinder, Cinder Backup and Glance: Add the keyrings for client.cinder , client.cinder-backup and client.glance to the appropriate nodes and change their ownership: OpenStack Nova nodes need the keyring file for the nova-compute process: The OpenStack Nova nodes also need to store the secret key of the client.cinder user in libvirt . The libvirt process needs the secret key to access the cluster while attaching a block device from Cinder. Create a temporary copy of the secret key on the OpenStack Nova nodes: If the storage cluster contains Ceph block device images that use the exclusive-lock feature, ensure that all Ceph block device users have permissions to blacklist clients: Return to the OpenStack Nova node: Generate a UUID for the secret, and save the UUID of the secret for configuring nova-compute later: Note You do not necessarily need the UUID on all the Nova compute nodes. However, from a platform consistency perspective, it's better to keep the same UUID. On the OpenStack Nova nodes, add the secret key to libvirt and remove the temporary copy of the key: Define the secret for libvirt and set its value: Additional Resources See the Managing Ceph users section in the Red Hat Ceph Storage Administration Guide for more details.
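As a quick check that the keyrings and the libvirt secret are usable, the following is a minimal verification sketch; it assumes the pools and client users created in this chapter, and the commands are run on the Cinder volume node and the Nova compute nodes respectively.
# On the Cinder volume node: the client.cinder user can reach the cluster
# and list the volumes pool created earlier.
ceph -s --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring
rbd ls volumes --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring
# On the Nova compute nodes: the libvirt secret for client.cinder exists.
virsh secret-list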
[ "ceph -s", "ceph osd pool create volumes 128 ceph osd pool create backups 128 ceph osd pool create images 128 ceph osd pool create vms 128", "yum install python-rbd yum install ceph-common", "yum install python-rbd", "scp /etc/ceph/ceph.conf OPENSTACK_NODES :/etc/ceph", "ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images' ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups' ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'", "ceph auth get-or-create client.cinder | ssh CINDER_VOLUME_NODE sudo tee /etc/ceph/ceph.client.cinder.keyring ssh CINDER_VOLUME_NODE chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring ceph auth get-or-create client.cinder-backup | ssh CINDER_BACKUP_NODE tee /etc/ceph/ceph.client.cinder-backup.keyring ssh CINDER_BACKUP_NODE chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring ceph auth get-or-create client.glance | ssh GLANCE_API_NODE sudo tee /etc/ceph/ceph.client.glance.keyring ssh GLANCE_API_NODE chown glance:glance /etc/ceph/ceph.client.glance.keyring", "ceph auth get-or-create client.cinder | ssh NOVA_NODE tee /etc/ceph/ceph.client.cinder.keyring", "ceph auth get-key client.cinder | ssh NOVA_NODE tee client.cinder.key", "ceph auth caps client. ID mon 'allow r, allow command \"osd blacklist\"' osd ' EXISTING_OSD_USER_CAPS '", "ssh NOVA_NODE", "uuidgen > uuid-secret.txt", "cat > secret.xml <<EOF <secret ephemeral='no' private='no'> <uuid>`cat uuid-secret.txt`</uuid> <usage type='ceph'> <name>client.cinder secret</name> </usage> </secret> EOF", "virsh secret-define --file secret.xml virsh secret-set-value --secret USD(cat uuid-secret.txt) --base64 USD(cat client.cinder.key) && rm client.cinder.key secret.xml" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/block_device_to_openstack_guide/installing-and-configuring-ceph-for-openstack
Chapter 16. Managing GPU devices in virtual machines
Chapter 16. Managing GPU devices in virtual machines To enhance the graphical performance of your virtual machines (VMs) on a RHEL 9 host, you can assign a host GPU to a VM. You can detach the GPU from the host and pass full control of the GPU directly to the VM. You can create multiple mediated devices from a physical GPU, and assign these devices as virtual GPUs (vGPUs) to multiple guests. This is currently only supported on selected NVIDIA GPUs, and only one mediated device can be assigned to a single guest. Important GPU assignment is currently only supported on Intel 64 and AMD64 systems. 16.1. Assigning a GPU to a virtual machine To access and control GPUs that are attached to the host system, you must configure the host system to pass direct control of the GPU to the virtual machine (VM). Note If you are looking for information about assigning a virtual GPU, see Managing NVIDIA vGPU devices . Prerequisites You must enable IOMMU support on the host machine kernel. On an Intel host, you must enable VT-d: Regenerate the GRUB configuration with the intel_iommu=on and iommu=pt parameters: Reboot the host. On an AMD host, you must enable AMD-Vi. Note that on AMD hosts, IOMMU is enabled by default; you can add iommu=pt to switch it to pass-through mode: Regenerate the GRUB configuration with the iommu=pt parameter: Note The pt option only enables IOMMU for devices used in pass-through mode and provides better host performance. However, not all hardware supports the option. You can still assign devices even when this option is not enabled. Reboot the host. Procedure Prevent the driver from binding to the GPU. Identify the PCI bus address to which the GPU is attached. Prevent the host's graphics driver from using the GPU. To do so, use the GPU PCI ID with the pci-stub driver. For example, the following command prevents the driver from binding to the GPU with the 10de:11fa PCI ID: Reboot the host. Optional: If certain GPU functions, such as audio, cannot be passed through to the VM due to support limitations, you can modify the driver bindings of the endpoints within an IOMMU group to pass through only the necessary GPU functions. Convert the GPU settings to XML and note the PCI address of the endpoints that you want to prevent from attaching to the host drivers. To do so, convert the GPU's PCI bus address to a libvirt-compatible format by adding the pci_ prefix to the address, and converting the delimiters to underscores. For example, the following command displays the XML configuration of the GPU attached at the 0000:02:00.0 bus address. <device> <name>pci_0000_02_00_0</name> <path>/sys/devices/pci0000:00/0000:00:03.0/0000:02:00.0</path> <parent>pci_0000_00_03_0</parent> <driver> <name>pci-stub</name> </driver> <capability type='pci'> <domain>0</domain> <bus>2</bus> <slot>0</slot> <function>0</function> <product id='0x11fa'>GK106GL [Quadro K4000]</product> <vendor id='0x10de'>NVIDIA Corporation</vendor> <iommuGroup number='13'> <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/> </iommuGroup> <pci-express> <link validity='cap' port='0' speed='8' width='16'/> <link validity='sta' speed='2.5' width='16'/> </pci-express> </capability> </device> Prevent the endpoints from attaching to the host driver.
In this example, to assign the GPU to a VM, prevent the endpoints that correspond to the audio function, <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/> , from attaching to the host audio driver, and instead attach the endpoints to VFIO-PCI. Attach the GPU to the VM Create an XML configuration file for the GPU by using the PCI bus address. For example, you can create the following XML file, GPU-Assign.xml, by using parameters from the GPU's bus address. <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </source> </hostdev> Save the file on the host system. Merge the file with the VM's XML configuration. For example, the following command merges the GPU XML file, GPU-Assign.xml, with the XML configuration file of the System1 VM. Note The GPU is attached as a secondary graphics device to the VM. Assigning a GPU as the primary graphics device is not supported, and Red Hat does not recommend removing the primary emulated graphics device in the VM's XML configuration. Verification The device appears under the <devices> section in VM's XML configuration. For more information, see Sample virtual machine XML configuration . Known Issues The number of GPUs that can be attached to a VM is limited by the maximum number of assigned PCI devices, which in RHEL 9 is currently 64. However, attaching multiple GPUs to a VM is likely to cause problems with memory-mapped I/O (MMIO) on the guest, which may result in the GPUs not being available to the VM. To work around these problems, set a larger 64-bit MMIO space and configure the vCPU physical address bits to make the extended 64-bit MMIO space addressable. Attaching an NVIDIA GPU device to a VM that uses a RHEL 9 guest operating system currently disables the Wayland session on that VM, and loads an Xorg session instead. This is because of incompatibilities between NVIDIA drivers and Wayland. 16.2. Managing NVIDIA vGPU devices The vGPU feature makes it possible to divide a physical NVIDIA GPU device into multiple virtual devices, referred to as mediated devices . These mediated devices can then be assigned to multiple virtual machines (VMs) as virtual GPUs. As a result, these VMs can share the performance of a single physical GPU. Important Assigning a physical GPU to VMs, with or without using mediated devices, makes it impossible for the host to use the GPU. 16.2.1. Setting up NVIDIA vGPU devices To set up the NVIDIA vGPU feature, you need to download NVIDIA vGPU drivers for your GPU device, create mediated devices, and assign them to the intended virtual machines. For detailed instructions, see below. Prerequisites Your GPU supports vGPU mediated devices. For an up-to-date list of NVIDIA GPUs that support creating vGPUs, see the NVIDIA vGPU software documentation . If you do not know which GPU your host is using, install the lshw package and use the lshw -C display command. The following example shows the system is using an NVIDIA Tesla P4 GPU, compatible with vGPU. Procedure Download the NVIDIA vGPU drivers and install them on your system. For instructions, see the NVIDIA documentation . If the NVIDIA software installer did not create the /etc/modprobe.d/nvidia-installer-disable-nouveau.conf file, create a conf file of any name in /etc/modprobe.d/ , and add the following lines in the file: Regenerate the initial ramdisk for the current kernel, then reboot. 
Check that the kernel has loaded the nvidia_vgpu_vfio module and that the nvidia-vgpu-mgr.service service is running. In addition, if you are creating a vGPU based on an NVIDIA Ampere GPU device, ensure that virtual functions are enabled for the physical GPU. For instructions, see the NVIDIA documentation . Generate a device UUID. Prepare an XML file with a configuration of the mediated device, based on the detected GPU hardware. For example, the following configures a mediated device of the nvidia-63 vGPU type on an NVIDIA Tesla P4 card that runs on the 0000:01:00.0 PCI bus and uses the UUID generated in the previous step. <device> <parent>pci_0000_01_00_0</parent> <capability type="mdev"> <type id="nvidia-63"/> <uuid>30820a6f-b1a5-4503-91ca-0c10ba58692a</uuid> </capability> </device> Define a vGPU mediated device based on the XML file you prepared. For example: Optional: Verify that the mediated device is listed as inactive. Start the vGPU mediated device you created. Optional: Ensure that the mediated device is listed as active. Set the vGPU device to start automatically after the host reboots. Attach the mediated device to a VM that you want to share the vGPU resources with. To do so, add the following lines, along with the previously generated UUID, to the <devices/> section in the XML configuration of the VM. <hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci' display='on'> <source> <address uuid='30820a6f-b1a5-4503-91ca-0c10ba58692a'/> </source> </hostdev> Note that each UUID can only be assigned to one VM at a time. In addition, if the VM does not have QEMU video devices, such as virtio-vga , also add the ramfb='on' parameter on the <hostdev> line. For full functionality of the vGPU mediated devices to be available on the assigned VMs, set up NVIDIA vGPU guest software licensing on the VMs. For further information and instructions, see the NVIDIA Virtual GPU Software License Server User Guide . Verification Query the capabilities of the vGPU you created, and ensure it is listed as active and persistent. Start the VM and verify that the guest operating system detects the mediated device as an NVIDIA GPU. For example, if the VM uses Linux: Known Issues Assigning an NVIDIA vGPU mediated device to a VM that uses a RHEL 9 guest operating system currently disables the Wayland session on that VM, and loads an Xorg session instead. This is because of incompatibilities between NVIDIA drivers and Wayland. Additional resources NVIDIA vGPU software documentation The man virsh command 16.2.2. Removing NVIDIA vGPU devices To change the configuration of assigned vGPU mediated devices , you need to remove the existing devices from the assigned VMs. For instructions, see below: Prerequisites The VM from which you want to remove the device is shut down. Procedure Obtain the ID of the mediated device that you want to remove. Stop the running instance of the vGPU mediated device. Optional: Ensure the mediated device has been deactivated. Remove the device from the XML configuration of the VM. To do so, use the virsh edit utility to edit the XML configuration of the VM, and remove the mdev's configuration segment. The segment will look similar to the following: <hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci'> <source> <address uuid='30820a6f-b1a5-4503-91ca-0c10ba58692a'/> </source> </hostdev> Note that stopping and detaching the mediated device does not delete it, but rather keeps it as defined. As such, you can restart and attach the device to a different VM.
Optional: To delete the stopped mediated device, remove its definition. Verification If you only stopped and detached the device, ensure the mediated device is listed as inactive. If you also deleted the device, ensure the following command does not display it. Additional resources The man virsh command 16.2.3. Obtaining NVIDIA vGPU information about your system To evaluate the capabilities of the vGPU features available to you, you can obtain additional information about the mediated devices on your system, such as: How many mediated devices of a given type can be created What mediated devices are already configured on your system. Procedure To see the available GPU devices on your host that can support vGPU mediated devices, use the virsh nodedev-list --cap mdev_types command. For example, the following shows a system with two NVIDIA Quadro RTX6000 devices. To display vGPU types supported by a specific GPU device, as well as additional metadata, use the virsh nodedev-dumpxml command. Additional resources The man virsh command 16.2.4. Remote desktop streaming services for NVIDIA vGPU The following remote desktop streaming services are supported on the RHEL 9 hypervisor with NVIDIA vGPU or NVIDIA GPU passthrough enabled: HP ZCentral Remote Boost/Teradici NICE DCV Mechdyne TGX For support details, see the appropriate vendor support matrix. 16.2.5. Additional resources NVIDIA vGPU software documentation
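As a convenience, the mediated-device workflow in section 16.2.1 can be scripted. The following Bash sketch chains together the uuidgen and virsh nodedev-* commands shown above; the parent device and vGPU type are the chapter's example Tesla P4 values and must be replaced with values reported by virsh nodedev-list --cap mdev_types and virsh nodedev-dumpxml on your own host.

#!/bin/bash
# Sketch: define, start, and autostart one vGPU mediated device.
# PARENT and VGPU_TYPE are the example values used in this chapter; replace them
# with values reported by "virsh nodedev-list --cap mdev_types" and
# "virsh nodedev-dumpxml" on your host.
set -euo pipefail

PARENT="pci_0000_01_00_0"   # physical GPU that backs the vGPU
VGPU_TYPE="nvidia-63"       # mdev type supported by that GPU
UUID="$(uuidgen)"

# Write the mediated-device definition consumed by "virsh nodedev-define".
cat > /tmp/vgpu.xml <<EOF
<device>
  <parent>${PARENT}</parent>
  <capability type="mdev">
    <type id="${VGPU_TYPE}"/>
    <uuid>${UUID}</uuid>
  </capability>
</device>
EOF
virsh nodedev-define /tmp/vgpu.xml

# The defined device name follows the mdev_<uuid_with_underscores>_<parent_address> pattern.
DEV="$(virsh nodedev-list --cap mdev --inactive | grep "${UUID//-/_}")"
virsh nodedev-start "${DEV}"
virsh nodedev-autostart "${DEV}"
echo "vGPU ${UUID} created on ${PARENT}; reference it from the VM with an <hostdev type='mdev'> entry."

The resulting UUID is the value you place in the <address uuid='...'/> element when attaching the device to a VM.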
[ "grubby --args=\"intel_iommu=on iommu_pt\" --update-kernel DEFAULT", "grubby --args=\"iommu=pt\" --update-kernel DEFAULT", "lspci -Dnn | grep VGA 0000:02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK106GL [Quadro K4000] [ 10de:11fa ] (rev a1)", "grubby --args=\"pci-stub.ids=10de:11fa\" --update-kernel DEFAULT", "virsh nodedev-dumpxml pci_0000_02_00_0", "<device> <name>pci_0000_02_00_0</name> <path>/sys/devices/pci0000:00/0000:00:03.0/0000:02:00.0</path> <parent>pci_0000_00_03_0</parent> <driver> <name>pci-stub</name> </driver> <capability type='pci'> <domain>0</domain> <bus>2</bus> <slot>0</slot> <function>0</function> <product id='0x11fa'>GK106GL [Quadro K4000]</product> <vendor id='0x10de'>NVIDIA Corporation</vendor> <iommuGroup number='13'> <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/> </iommuGroup> <pci-express> <link validity='cap' port='0' speed='8' width='16'/> <link validity='sta' speed='2.5' width='16'/> </pci-express> </capability> </device>", "driverctl set-override 0000:02:00.1 vfio-pci", "<hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </source> </hostdev>", "virsh attach-device System1 --file /home/GPU-Assign.xml --persistent Device attached successfully.", "lshw -C display *-display description: 3D controller product: GP104GL [Tesla P4] vendor: NVIDIA Corporation physical id: 0 bus info: pci@0000:01:00.0 version: a1 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress cap_list configuration: driver=vfio-pci latency=0 resources: irq:16 memory:f6000000-f6ffffff memory:e0000000-efffffff memory:f0000000-f1ffffff", "blacklist nouveau options nouveau modeset=0", "dracut --force reboot", "lsmod | grep nvidia_vgpu_vfio nvidia_vgpu_vfio 45011 0 nvidia 14333621 10 nvidia_vgpu_vfio mdev 20414 2 vfio_mdev,nvidia_vgpu_vfio vfio 32695 3 vfio_mdev,nvidia_vgpu_vfio,vfio_iommu_type1 systemctl status nvidia-vgpu-mgr.service nvidia-vgpu-mgr.service - NVIDIA vGPU Manager Daemon Loaded: loaded (/usr/lib/systemd/system/nvidia-vgpu-mgr.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2018-03-16 10:17:36 CET; 5h 8min ago Main PID: 1553 (nvidia-vgpu-mgr) [...]", "uuidgen 30820a6f-b1a5-4503-91ca-0c10ba58692a", "<device> <parent>pci_0000_01_00_0</parent> <capability type=\"mdev\"> <type id=\"nvidia-63\"/> <uuid>30820a6f-b1a5-4503-91ca-0c10ba58692a</uuid> </capability> </device>", "virsh nodedev-define vgpu-test.xml Node device mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 created from vgpu-test.xml", "virsh nodedev-list --cap mdev --inactive mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0", "virsh nodedev-start mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 Device mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 started", "virsh nodedev-list --cap mdev mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0", "virsh nodedev-autostart mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 Device mdev_d196754e_d8ed_4f43_bf22_684ed698b08b_0000_9b_00_0 marked as autostarted", "<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci' display='on'> <source> <address uuid='30820a6f-b1a5-4503-91ca-0c10ba58692a'/> </source> </hostdev>", "virsh nodedev-info mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 Name: virsh nodedev-autostart mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 Parent: pci_0000_01_00_0 Active: yes 
Persistent: yes Autostart: yes", "lspci -d 10de: -k 07:00.0 VGA compatible controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2 32GB] (rev a1) Subsystem: NVIDIA Corporation Device 12ce Kernel driver in use: nvidia Kernel modules: nouveau, nvidia_drm, nvidia", "virsh nodedev-list --cap mdev mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0", "virsh nodedev-destroy mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 Destroyed node device 'mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0'", "virsh nodedev-info mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 Name: virsh nodedev-autostart mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 Parent: pci_0000_01_00_0 Active: no Persistent: yes Autostart: yes", "<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci'> <source> <address uuid='30820a6f-b1a5-4503-91ca-0c10ba58692a'/> </source> </hostdev>", "virsh nodedev-undefine mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 Undefined node device 'mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0'", "virsh nodedev-list --cap mdev --inactive mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0", "virsh nodedev-list --cap mdev", "virsh nodedev-list --cap mdev_types pci_0000_5b_00_0 pci_0000_9b_00_0", "virsh nodedev-dumpxml pci_0000_9b_00_0 <device> <name>pci_0000_9b_00_0</name> <path>/sys/devices/pci0000:9a/0000:9a:00.0/0000:9b:00.0</path> <parent>pci_0000_9a_00_0</parent> <driver> <name>nvidia</name> </driver> <capability type='pci'> <class>0x030000</class> <domain>0</domain> <bus>155</bus> <slot>0</slot> <function>0</function> <product id='0x1e30'>TU102GL [Quadro RTX 6000/8000]</product> <vendor id='0x10de'>NVIDIA Corporation</vendor> <capability type='mdev_types'> <type id='nvidia-346'> <name>GRID RTX6000-12C</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>2</availableInstances> </type> <type id='nvidia-439'> <name>GRID RTX6000-3A</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>8</availableInstances> </type> [...] <type id='nvidia-440'> <name>GRID RTX6000-4A</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>6</availableInstances> </type> <type id='nvidia-261'> <name>GRID RTX6000-8Q</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>3</availableInstances> </type> </capability> <iommuGroup number='216'> <address domain='0x0000' bus='0x9b' slot='0x00' function='0x3'/> <address domain='0x0000' bus='0x9b' slot='0x00' function='0x1'/> <address domain='0x0000' bus='0x9b' slot='0x00' function='0x2'/> <address domain='0x0000' bus='0x9b' slot='0x00' function='0x0'/> </iommuGroup> <numa node='2'/> <pci-express> <link validity='cap' port='0' speed='8' width='16'/> <link validity='sta' speed='2.5' width='8'/> </pci-express> </capability> </device>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/assembly_managing-gpu-devices-in-virtual-machines_configuring-and-managing-virtualization
Chapter 9. Detach volumes after non-graceful node shutdown
Chapter 9. Detach volumes after non-graceful node shutdown This feature allows drivers to automatically detach volumes when a node goes down non-gracefully. 9.1. Overview A graceful node shutdown occurs when the kubelet's node shutdown manager detects the upcoming node shutdown action. Non-graceful shutdowns occur when the kubelet does not detect a node shutdown action, which can occur because of system or hardware failures. Also, the kubelet may not detect a node shutdown action when the shutdown command does not trigger the Inhibitor Locks mechanism used by the kubelet on Linux, or because of a user error, for example, if the shutdownGracePeriod and shutdownGracePeriodCriticalPods details are not configured correctly for that node. With this feature, when a non-graceful node shutdown occurs, you can manually add an out-of-service taint on the node to allow volumes to automatically detach from the node. 9.2. Adding an out-of-service taint manually for automatic volume detachment Prerequisites Access to the cluster with cluster-admin privileges. Procedure To allow volumes to detach automatically from a node after a non-graceful node shutdown: After a node is detected as unhealthy, shut down the worker node. Ensure that the node is shut down by running the following command and checking the status: oc get node <node name> 1 1 <node name> = name of the non-gracefully shutdown node Important If the node is not completely shut down, do not proceed with tainting the node. If the node is still up and the taint is applied, filesystem corruption can occur. Taint the corresponding node object by running the following command: Important Tainting a node this way deletes all pods on that node. This also causes any pods that are backed by statefulsets to be evicted, and replacement pods to be created on a different node. oc adm taint node <node name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute 1 1 <node name> = name of the non-gracefully shutdown node After the taint is applied, the volumes detach from the shutdown node, allowing their disks to be attached to a different node. Example The resulting YAML file resembles the following: spec: taints: - effect: NoExecute key: node.kubernetes.io/out-of-service value: nodeshutdown Restart the node. Remove the taint from the corresponding node object by running the following command: oc adm taint node <node name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute- 1
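The procedure in section 9.2 lends itself to a small wrapper script. The following Bash sketch adds a Ready-condition check before applying the taint; that check is an extra safeguard not shown in the procedure, while the oc adm taint commands are the ones listed above.

#!/bin/bash
# Sketch: apply the out-of-service taint only after confirming the node no
# longer reports Ready=True. The node name is passed as the first argument.
set -euo pipefail

NODE="$1"

READY="$(oc get node "${NODE}" -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')"
if [ "${READY}" = "True" ]; then
  echo "Node ${NODE} still reports Ready=True; verify it is fully shut down before tainting." >&2
  exit 1
fi

# Apply the taint so that volumes detach and their disks can move to other nodes.
oc adm taint node "${NODE}" node.kubernetes.io/out-of-service=nodeshutdown:NoExecute

# After the node is repaired and restarted, remove the taint again:
# oc adm taint node "${NODE}" node.kubernetes.io/out-of-service=nodeshutdown:NoExecute-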
[ "get node <node name> 1", "adm taint node <node name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute 1", "spec: taints: - effect: NoExecute key: node.kubernetes.io/out-of-service value: nodeshutdown", "adm taint node <node name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute- 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/storage/ephemeral-storage-csi-vol-detach-non-graceful-shutdown
Chapter 55. Interceptors in the Apache CXF Runtime
Chapter 55. Interceptors in the Apache CXF Runtime Abstract Most of the functionality in the Apache CXF runtime is implemented by interceptors. Every endpoint created by the Apache CXF runtime has three potential interceptor chains for processing messages. The interceptors in these chains are responsible for transforming messages between the raw data transported across the wire and the Java objects handled by the endpoint's implementation code. The interceptors are organized into phases to ensure that processing happens in the proper order. Overview A large part of what Apache CXF does entails processing messages. When a consumer makes an invocation on a remote service, the runtime needs to marshal the data into a message the service can consume and place it on the wire. The service provider must unmarshal the message, execute its business logic, and marshal the response into the appropriate message format. The consumer must then unmarshal the response message, correlate it to the proper request, and pass it back to the consumer's application code. In addition to the basic marshaling and unmarshaling, the Apache CXF runtime may do a number of other things with the message data. For example, if WS-RM is activated, the runtime must process the message chunks and acknowledgement messages before marshaling and unmarshaling the message. If security is activated, the runtime must validate the message's credentials as part of the message processing sequence. Figure 55.1, "Apache CXF interceptor chains" shows the basic path that a request message takes when it is received by a service provider. Figure 55.1. Apache CXF interceptor chains Message processing in Apache CXF When an Apache CXF-developed consumer invokes a remote service, the following message processing sequence is started: The Apache CXF runtime creates an outbound interceptor chain to process the request. If the invocation starts a two-way message exchange, the runtime creates an inbound interceptor chain and a fault processing interceptor chain. The request message is passed sequentially through the outbound interceptor chain. Each interceptor in the chain performs some processing on the message. For example, the Apache CXF supplied SOAP interceptors package the message in a SOAP envelope. If any of the interceptors on the outbound chain create an error condition, the chain is unwound and control is returned to the application-level code. An interceptor chain is unwound by calling the fault processing method on all of the previously invoked interceptors. The request is dispatched to the appropriate service provider. When the response is received, it is passed sequentially through the inbound interceptor chain. Note If the response is an error message, it is passed into the fault processing interceptor chain. If any of the interceptors on the inbound chain create an error condition, the chain is unwound. When the message reaches the end of the inbound interceptor chain, it is passed back to the application code. When an Apache CXF-developed service provider receives a request from a consumer, a similar process takes place: The Apache CXF runtime creates an inbound interceptor chain to process the request message. If the request is part of a two-way message exchange, the runtime also creates an outbound interceptor chain and a fault processing interceptor chain. The request is passed sequentially through the inbound interceptor chain.
If any of the interceptors on the inbound chain create an error condition, the chain is unwound and a fault is dispatched to the consumer. An interceptor chain is unwound by calling the fault processing method on all of the previously invoked interceptors. When the request reaches the end of the inbound interceptor chain, it is passed to the service implementation. When the response is ready, it is passed sequentially through the outbound interceptor chain. Note If the response is an exception, it is passed through the fault processing interceptor chain. If any of the interceptors on the outbound chain create an error condition, the chain is unwound and a fault message is dispatched. Once the response reaches the end of the outbound chain, it is dispatched to the consumer. Interceptors All of the message processing in the Apache CXF runtime is done by interceptors . Interceptors are POJOs that have access to the message data before it is passed to the application layer. They can do a number of things, including transforming the message, stripping headers off of the message, or validating the message data. For example, an interceptor could read the security headers off of a message, validate the credentials against an external security service, and decide if message processing can continue. The message data available to an interceptor is determined by several factors: the interceptor's chain the interceptor's phase the other interceptors that occur earlier in the chain Phases Interceptors are organized into phases . A phase is a logical grouping of interceptors with common functionality. Each phase is responsible for a specific type of message processing. For example, interceptors that process the marshaled Java objects that are passed to the application layer would all occur in the same phase. Interceptor chains Phases are aggregated into interceptor chains . An interceptor chain is a list of interceptor phases that are ordered based on whether messages are inbound or outbound. Each endpoint created using Apache CXF has three interceptor chains: a chain for inbound messages a chain for outbound messages a chain for error messages Interceptor chains are primarily constructed based on the choice of binding and transport used by the endpoint. Adding other runtime features, such as security or logging, also adds interceptors to the chains. Developers can also add custom interceptors to a chain using configuration. Developing interceptors Developing an interceptor, regardless of its functionality, always follows the same basic procedure: Chapter 56, The Interceptor APIs Apache CXF provides a number of abstract interceptors to make it easier to develop custom interceptors. Section 57.2, "Specifying an interceptor's phase" Interceptors require certain parts of a message to be available and require the data to be in a certain format. The contents of the message and the format of the data are partially determined by an interceptor's phase. Section 57.3, "Constraining an interceptor's placement in a phase" In general, the ordering of interceptors within a phase is not important. However, in certain situations it may be important to ensure that an interceptor is executed before, or after, other interceptors in the same phase. Section 58.2, "Processing messages" Section 58.3, "Unwinding after an error" If an error occurs in the active interceptor chain after the interceptor has executed, its fault processing logic is invoked. Chapter 59, Configuring Endpoints to Use Interceptors
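To make the development workflow concrete, the following Java sketch shows a minimal custom interceptor. The class name, chosen phase, and log output are illustrative only; the AbstractPhaseInterceptor base class and the handleMessage and handleFault methods are the standard interceptor API covered in the chapters referenced above.

// Minimal sketch of a custom interceptor; the class name, phase choice, and log
// output are illustrative.
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;

public class RequestLoggingInterceptor extends AbstractPhaseInterceptor<Message> {

    public RequestLoggingInterceptor() {
        // The phase passed to the superclass determines where in the chain this
        // interceptor runs; PRE_INVOKE is late enough on an inbound chain that
        // earlier phases have already unmarshaled the message.
        super(Phase.PRE_INVOKE);
    }

    @Override
    public void handleMessage(Message message) throws Fault {
        // Called as the chain traverses this interceptor's phase.
        Object method = message.get(Message.HTTP_REQUEST_METHOD);
        System.out.println("Inbound request, HTTP method: " + method);
    }

    @Override
    public void handleFault(Message message) {
        // Called when a later interceptor raises an error and the chain unwinds.
        System.out.println("Chain unwound before the request reached the service.");
    }
}

Registering this interceptor on an endpoint's inbound chain, for example through configuration as described in Chapter 59, causes handleMessage to run in the PRE_INVOKE phase and handleFault to run if the chain later unwinds.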
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/cxfinterceptorintro
Chapter 2. Forking the RHTAP catalog repository
Chapter 2. Forking the RHTAP catalog repository Once developers start using your instance of RHTAP, you might want to customize your instance, to better suit their needs. One aspect of RHTAP that you can customize is the set of software templates that it provides. These templates help developers quickly build applications. Forking our catalog repository, which contains the default set of software templates, enables you to customize the templates for your instance. Prerequisites: A GitHub account Procedure: In your web browser, navigate to the RHTAP software catalog repository . Beneath the banner of the page, select Fork and fork the repository. Uncheck the box that says "Copy the main branch only". In your new fork, beneath the banner, click main to open a dropdown menu. Under Tags , select the release that corresponds to the version of RHTAP that you are using. For example, if you are using version 1.0.0 of RHTAP, you should use your forked instance of this release . Note Be sure to update your fork from time to time, so updates from the upstream repository can benefit your instance of RHTAP.
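To act on the note about keeping your fork current, you can synchronize it from the command line. The following sketch uses standard Git commands; the placeholder URLs and the main branch name are assumptions that you should adjust to match your fork and the upstream catalog repository.

# One-time setup: clone your fork and add the upstream catalog repository as a remote.
git clone git@github.com:<your-account>/<your-fork>.git
cd <your-fork>
git remote add upstream <upstream-catalog-repository-url>

# Periodically pull in upstream changes, including new release tags.
git fetch upstream --tags
git checkout main
git merge upstream/main
git push origin main --tags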
null
https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.0/html/installing_red_hat_trusted_application_pipeline/forking-the-catalog-repository
Chapter 3. Supported platforms
Chapter 3. Supported platforms You can find the supported platforms and life cycle dates for both current and past versions of Red Hat Developer Hub on the Life Cycle page .
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/about_red_hat_developer_hub/supported-platforms_about-rhdh
Chapter 1. Deploying your Red Hat build of Quarkus applications to OpenShift Container Platform
Chapter 1. Deploying your Red Hat build of Quarkus applications to OpenShift Container Platform As an application developer, you can deploy your Quarkus applications to Red Hat OpenShift Container Platform by using a single Maven command. This functionality is provided by the quarkus-openshift extension, which supports multiple deployment options, including the Docker build strategy and the Source-to-Image (S2I) strategy. Here, you can learn about the preferred workflows to deploy your Quarkus applications to production environments. To learn about other ways to deploy Quarkus applications, see the Deploying on OpenShift guide in the Quarkus community. Prerequisites You have OpenJDK 17 or 21 installed. You have set the JAVA_HOME environment variable to the location of the Java SDK. You have Apache Maven 3.8.6 or later installed. You have a Quarkus Maven project that includes the quarkus-openshift extension. To add the Quarkus OpenShift extension, see Adding the Quarkus OpenShift extension . You have access to an OpenShift Container Platform cluster and the latest compatible version of the oc tool installed. For information about installing the oc tool, see CLI tools . 1.1. OpenShift Container Platform build strategies and Red Hat build of Quarkus Red Hat OpenShift Container Platform is a Kubernetes-based platform for developing and running containerized applications. Although the Kubernetes upstream project provides additional strategies, Red Hat supports only the following strategies in Quarkus: 1.1.1. Overview of OpenShift Container Platform build strategies Docker build This strategy builds the artifacts outside the OpenShift Container Platform cluster, locally or in a CI environment, and provides them to the OpenShift Container Platform build system together with a Dockerfile. The artifacts include JAR files or a native executable. The container gets built inside the OpenShift Container Platform cluster and is provided as an image stream. Note The OpenShift Container Platform Docker build strategy is the preferred build strategy because it supports Quarkus applications targeted for JVM or compiled to native executables. However, for compatibility with earlier Quarkus versions, the default build strategy is S2I. To select the OpenShift Container Platform Docker build strategy, use the quarkus.openshift.build-strategy property. Source to Image (S2I) The build process is performed inside the OpenShift Container Platform cluster. Red Hat build of Quarkus fully supports using S2I to deploy Red Hat build of Quarkus as a JVM application. Binary S2I This strategy uses a JAR file as input to the S2I build process, which speeds up the building and deploying of your application. 1.1.2. Build strategies supported by Quarkus The following table outlines the build strategies that Red Hat build of Quarkus 3.8 supports: Build strategy Support for Red Hat build of Quarkus tools Support for JVM Support for native Support for JVM Serverless Support for native Serverless Docker build YES YES YES YES YES S2I Binary YES YES NO NO NO Source S2I NO YES NO NO NO Additional resources Using S2I to deploy Quarkus applications to OpenShift Container Platform Deploying Quarkus Java applications to OpenShift Container Platform Deploying Quarkus applications compiled to native executables 1.2. 
Adding the Red Hat build of Quarkus OpenShift extension To build and deploy your applications as a container image that runs inside your OpenShift Container Platform cluster, you must add the Red Hat build of Quarkus OpenShift extension quarkus-openshift as a dependency to your project. The Quarkus OpenShift extension also generates OpenShift Container Platform resources such as image streams, build configuration, deployment, and service definitions. If your Quarkus application includes the quarkus-smallrye-health extension, OpenShift Container Platform can access the health endpoint and verify the startup, liveness, and readiness of your application. Important From Red Hat build of Quarkus 3.8, the DeploymentConfig object, deprecated in OpenShift, is also deprecated in Red Hat build of Quarkus. Deployment is the default and preferred deployment kind for the Quarkus OpenShift extension. If you redeploy applications that you deployed before by using DeploymentConfig , by default, those applications use Deployment but do not remove the DeploymentConfig . This leads to a deployment of both new and old applications, so, you must remove the old DeploymentConfig manually. However, if you want to continue to use DeploymentConfig , it is still possible to do so by explicitly setting quarkus.openshift.deployment-kind to DeploymentConfig . Prerequisites You have a Quarkus Maven project. For information about how to create a Quarkus project with Maven, see Developing and compiling your Red Hat build of Quarkus applications with Apache Maven . Procedure To add the quarkus-openshift extension to your project, use one of the following methods: Configure the pom.xml file: pom.xml <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-openshift</artifactId> </dependency> Enter the following command on the OpenShift Container Platform CLI: ./mvnw quarkus:add-extension -Dextensions="io.quarkus:quarkus-openshift" Enter the following command on the Quarkus CLI: quarkus extension add 'quarkus-openshift' 1.3. Switching to the required OpenShift Container Platform project You can use the Red Hat OpenShift Container Platform command-line interface (CLI) to create applications and manage your OpenShift Container Platform projects. Use the information provided to create an OpenShift Container Platform project or to switch to an existing one. Prerequisites You have access to an OpenShift Container Platform cluster and the latest compatible version of the oc tool installed. For information about installing the oc tool, see CLI tools . Procedure Log in to the oc tool: oc login To show the current project space, enter the following command: oc project -q Use one of the following steps to go to the required OpenShift Container Platform project: If the project already exists, switch to the project: oc project <project_name> If the project does not exist, create a new project: oc new-project <project_name> Additional resources Getting started with the OpenShift CLI 1.4. Deploying Red Hat build of Quarkus Java applications to OpenShift Container Platform The Red Hat build of Quarkus OpenShift extension enables you to deploy your Quarkus application to OpenShift Container Platform by using the Docker build strategy. The container gets built inside the OpenShift Container Platform cluster and is provided as an image stream. Your Quarkus project includes pregenerated Dockerfiles with instructions. 
When you want to use a custom Dockerfile, you must add the file in the src/main/docker directory or anywhere inside the module. Additionally, you must set the path to your Dockerfile by using the quarkus.openshift.jvm-dockerfile property. Prerequisites You have a Red Hat build of Quarkus Maven project that includes the quarkus-openshift extension. You are working in the correct OpenShift project namespace, as outlined in Switching to the required OpenShift Container Platform project . Procedure Set the Docker build strategy in your application.properties configuration file: quarkus.openshift.build-strategy=docker Optional: Set the following properties in the application.properties file, as required by your environment: If you are using an untrusted certificate, configure the KubernetesClient : quarkus.kubernetes-client.trust-certs=true Expose the service to create an OpenShift Container Platform route: quarkus.openshift.route.expose=true Set the path to your custom Dockerfile: quarkus.openshift.jvm-dockerfile=<path_to_your_dockerfile> The following example shows the path to the Dockerfile.custom-jvm : quarkus.openshift.jvm-dockerfile=src/main/resources/Dockerfile.custom-jvm Package and deploy your Quarkus application to the current OpenShift project: ./mvnw clean package -Dquarkus.openshift.deploy=true Verification The verification steps and related terminal outputs are demonstrated on the openshift-helloworld example application. Display the list of pods associated with your current OpenShift project: oc get pods NAME READY STATUS RESTARTS AGE openshift-helloworld-1-build 0/1 Completed 0 11m openshift-helloworld-1-deploy 0/1 Completed 0 10m openshift-helloworld-1-gzzrx 1/1 Running 0 10m To retrieve the log output for your application's pod, use the oc logs -f command with the <pod_name> value of the pod you are interested in. In this example, we use the openshift-helloworld-1-gzzrx pod name that corresponds with the latest pod prefixed with the name of your application: oc logs -f openshift-helloworld-1-gzzrx Starting the Java application using /opt/jboss/container/java/run/run-java.sh ... INFO exec -a "java" java -Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -XX:MaxRAMPercentage=50.0 -XX:+UseParallelGC -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -XX:+ExitOnOutOfMemoryError -cp "." -jar /deployments/quarkus-run.jar __ ____ __ _____ ___ __ ____ ______ --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \ --\___\_\____/_/ |_/_/|_/_/|_|\____/___/ 2024-06-27 17:13:25,254 INFO [io.quarkus] (main) getting-started 1.0.0-SNAPSHOT on JVM (powered by Quarkus 3.8.6.SP3-redhat-00002) started in 0.653s. Listening on: http://0.0.0.0:8080 2024-06-27 17:13:25,281 INFO [io.quarkus] (main) Profile prod activated. 2024-06-27 17:13:25,281 INFO [io.quarkus] (main) Installed features: [cdi, kubernetes, resteasy-reactive, smallrye-context-propagation, vertx] Retrieve a list of services: oc get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE openshift-helloworld ClusterIP 172.30.64.57 <none> 80/TCP 14m Get a URL to test your application. Note To create an OpenShift Container Platform route, ensure you have specified quarkus.openshift.route.expose=true in the application.properties file. 
oc get routes NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD openshift-helloworld openshift-helloworld-username-dev.apps.sandbox-m2.ll9k.p1.openshiftapps.com openshift-helloworld http None Note Be aware that the route is now listening on port 80 and no longer on port 8080. You can test the application demonstrated in this example with a web browser or a terminal by using curl and the complete URL output from oc get routes : http://openshift-helloworld-username-dev.apps.sandbox-m2.ll9k.p1.openshiftapps.com. For example: curl http://openshift-helloworld-username-dev.apps.sandbox-m2.ll9k.p1.openshiftapps.com . 1.5. Deploying Red Hat build of Quarkus applications compiled to native executables You can deploy your native Red Hat build of Quarkus application to OpenShift Container Platform by using the Docker build strategy. You must create a native executable for your application that targets the Linux AMD64 operating system. If your host operating system is different from this, create a native Linux executable using a container runtime, for example, Docker or Podman. Your Quarkus project includes pregenerated Dockerfiles with instructions. To use a custom Dockerfile, add the file in the src/main/docker directory or anywhere inside the module, and set the path to your Dockerfile by using the quarkus.openshift.native-dockerfile property. Prerequisites You have a Linux AMD64 system or an Open Container Initiative (OCI) compatible container runtime, such as Podman or Docker. You have a Quarkus Maven project that includes the quarkus-openshift extension. You are working in the correct OpenShift project namespace, as outlined in Switching to the required OpenShift Container Platform project . Procedure Set the Docker build strategy in your application.properties configuration file: quarkus.openshift.build-strategy=docker Set the container runtime: quarkus.native.container-build=true Optional: Set the following properties in the application.properties file, as required by your environment: If you are using an untrusted certificate, configure the KubernetesClient property: quarkus.kubernetes-client.trust-certs=true Expose the service to create an OpenShift Container Platform route: quarkus.openshift.route.expose=true Set the path to your custom Dockerfile: quarkus.openshift.native-dockerfile=<path_to_your_dockerfile> The following example shows the path to the Dockerfile.custom-native : quarkus.openshift.native-dockerfile=src/main/docker/Dockerfile.custom-native Specify the container engine: To build a native executable with Podman: quarkus.native.container-runtime=podman To build a native executable with Docker: quarkus.native.container-runtime=docker Finally, build a native executable, package, and deploy your application to OpenShift Container Platform: ./mvnw clean package -Pnative -Dquarkus.openshift.deploy=true Verification Verify that an image stream and a service resource are created and the Quarkus application is deployed by using the OpenShift web console. Alternatively, you can run the following OpenShift Container Platform command-line interface (CLI) commands: oc get is 1 oc get pods 2 oc get svc 3 1 List the image streams created. 2 View a list of pods associated with your current OpenShift project. 3 Get the list of Kubernetes services. To retrieve the log output for your application's pod, enter the following command where <pod_name> is the name of the latest pod prefixed with the name of your application: oc logs -f <pod_name> Additional resources Managing image streams 1.6.
Using S2I to deploy Red Hat build of Quarkus applications to OpenShift Container Platform You can deploy your Red Hat build of Quarkus applications to OpenShift Container Platform by using the Source-to-Image (S2I) method. With S2I, you must provide the source code to the build container through a Git repository or by uploading the source at build time. Important S2I is not supported for native deployments. For deploying Quarkus applications compiled to native executables, use the Docker build strategy . The procedure for deploying your Quarkus applications to OpenShift Container Platform by using S2I differs depending on the Java version you are using. 1.6.1. Using S2I to deploy Red Hat build of Quarkus applications to OpenShift Container Platform with Java 17 You can deploy your Red Hat build of Quarkus applications running on Java 17 to OpenShift Container Platform by using the Source-to-Image (S2I) method. Prerequisites You have a Quarkus application built with Java 17. For Java 21 applications, see Using S2I to deploy Red Hat build of Quarkus applications to OpenShift Container Platform with Java 21 . (Optional): You have a Quarkus Maven project that includes the quarkus-openshift extension. You are working in the correct OpenShift project namespace, as outlined in Switching to the required OpenShift Container Platform project . Your Quarkus Maven project is hosted in a Git repository. Procedure Open the pom.xml file, and change the Java configuration to version 17, as follows: <maven.compiler.source>17</maven.compiler.source> <maven.compiler.target>17</maven.compiler.target> To package your Java 17 application, enter the following command: Create a directory called .s2i at the same level as the pom.xml file. Create a file called environment in the .s2i directory and add the following content: Commit and push your changes to the remote Git repository. To import the supported OpenShift Container Platform image, enter the following command: Note If you are using the OpenShift image registry and pulling from image streams in the same project, your pod service account should already have the correct permissions. If you are pulling images across other OpenShift Container Platform projects or from secured registries, additional configuration steps might be required. For more information, see Using image pull secrets in Red Hat OpenShift Container Platform documentation. To build the project, create the application, and deploy the OpenShift Container Platform service, enter the following command: oc new-app registry.access.redhat.com/ubi8/openjdk-17~<git_path> --name=<project_name> Where: <git_path> is the path to the Git repository that hosts your Quarkus project. For example, oc new-app registry.access.redhat.com/ubi8/openjdk-17~https://github.com/johndoe/code-with-quarkus.git --name=code-with-quarkus . If you do not have SSH keys configured for the Git repository, when specifying the Git path, use the HTTPS URL instead of the SSH URL. <project_name> is the name of your application.
To deploy an updated version of the project, push any updates to the Git repository, and then enter the following command: oc start-build <project_name> To expose a route to the Quarkus application, enter the following command: oc expose svc <project_name> Verification To view a list of pods associated with your current OpenShift project, enter the following command: oc get pods To retrieve the log output for your application's pod, enter the following command where <pod_name> is the name of the latest pod prefixed with the name of your application: oc logs -f <pod_name> Additional resources Red Hat build of OpenJDK applications in containers Route configuration 1.6.2. Using S2I to deploy Red Hat build of Quarkus applications to OpenShift Container Platform with Java 21 You can deploy your Red Hat build of Quarkus applications running on Java 21 to OpenShift Container Platform by using the Source-to-Image (S2I) method. Prerequisites You have a Quarkus application built with Java 21. For Java 17 applications, see Using S2I to deploy Red Hat build of Quarkus applications to OpenShift Container Platform with Java 17 . (Optional): You have a Quarkus Maven project that includes the quarkus-openshift extension. You are working in the correct OpenShift Container Platform project namespace, as outlined in Switching to the required OpenShift Container Platform project . Your Quarkus Maven project is hosted in a Git repository. Procedure Open the pom.xml file, and change the Java configuration to version 21, as follows: <maven.compiler.source>21</maven.compiler.source> <maven.compiler.target>21</maven.compiler.target> To package your Java 21 application, enter the following command: Create a directory called .s2i at the same level as the pom.xml file. Create a file called environment in the .s2i directory and add the following content: Commit and push your changes to the remote Git repository. To import the supported OpenShift Container Platform image, enter the following command: Note If you are using the OpenShift image registry and pulling from image streams in the same project, your pod service account should already have the correct permissions. If you are pulling images across other OpenShift Container Platform projects or from secured registries, additional configuration steps might be required. For more information, see Using image pull secrets in Red Hat OpenShift Container Platform documentation. If you are deploying on IBM Z infrastructure, enter oc import-image ubi8/openjdk-21 --from=registry.redhat.io/ubi8/openjdk-21 --confirm instead. For information about this image, see the Red Hat build of OpenJDK 21 page. To build the project, create the application, and deploy the OpenShift Container Platform service, enter the following command: oc new-app registry.access.redhat.com/ubi8/openjdk-21~<git_path> --name=<project_name> Where: <git_path> is the path to the Git repository that hosts your Quarkus project. For example, oc new-app registry.access.redhat.com/ubi8/openjdk-21~https://github.com/johndoe/code-with-quarkus.git --name=code-with-quarkus . If you do not have SSH keys configured for the Git repository, when specifying the Git path, use the HTTPS URL instead of the SSH URL. <project_name> is the name of your application. Note If you are deploying on IBM Z infrastructure, enter oc new-app ubi8/openjdk-21~<git_path> --name=<project_name> instead.
To deploy an updated version of the project, push any updates to the Git repository then enter the following command: oc start-build <project_name> To expose a route to the Quarkus application, enter the following command: oc expose svc <project_name> Verification To view a list of pods, enter the following command: oc get pods To retrieve the log output for your application's pod, enter the following command: oc logs -f <pod_name> Additional resources Red Hat build of OpenJDK applications in containers Route configuration 1.7. Red Hat build of Quarkus configuration properties for customizing deployments on OpenShift Container Platform You can customize your deployments on OpenShift Container Platform by defining optional configuration properties. You can configure your Red Hat build of Quarkus project in your applications.properties file or from the command line. Table 1.1. Quarkus configuration properties and their default values: Property Description Default quarkus.container-image.group The container image group. Must be set if the OpenShift Container Platform <project_name> is different from the username of the host system. USD{user.name} quarkus.container-image.registry The container registry to use. quarkus.kubernetes-client.trust-certs Kubernetes client certificate authentication. quarkus.kubernetes.deployment-target Deployment target platform. For example, openshift or knative . quarkus.native.container-build Builds a native Linux executable by using a container runtime. Docker is used by default. false quarkus.native.container-runtime The container runtime used to build the image, for example, Docker. quarkus.openshift.build-strategy The deployment strategy. s2i quarkus.openshift.route.expose Exposes a route for the Quarkus application. false quarkus.native.debug.enabled Enables debugging and generates debug symbols in a separate .debug file. When this property is used with quarkus.native.container-build=true , Red Hat build of Quarkus only supports Red Hat Enterprise Linux or other Linux distributions. The Red Hat Enterprise Linux and other Linux distributions contain the binutils package, which installs the objcopy utility to split the debug information from the native image. false 1.8. Additional resources Getting started with Quarkus OpenJDK Software Downloads Revised on 2025-02-28 13:38:59 UTC
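The Docker-strategy workflow from sections 1.3 and 1.4 can be summarized as a single command sequence. The following sketch assumes that application.properties already sets quarkus.openshift.build-strategy=docker and quarkus.openshift.route.expose=true; the project name is illustrative and the route name is assumed to match your application name.

# Log in and select the target project.
oc login
oc new-project quarkus-demo || oc project quarkus-demo

# Build the application and deploy it with the Quarkus OpenShift extension.
./mvnw clean package -Dquarkus.openshift.deploy=true

# Verify the rollout and call the exposed route (route name matches your application name).
oc get pods
ROUTE_HOST="$(oc get route <application_name> -o jsonpath='{.spec.host}')"
curl "http://${ROUTE_HOST}"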
[ "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-openshift</artifactId> </dependency>", "./mvnw quarkus:add-extension -Dextensions=\"io.quarkus:quarkus-openshift\"", "quarkus extension add 'quarkus-openshift'", "login", "project -q", "project <project_name>", "new-project <project_name>", "quarkus.openshift.build-strategy=docker", "quarkus.kubernetes-client.trust-certs=true", "quarkus.openshift.route.expose=true", "quarkus.openshift.jvm-dockerfile=<path_to_your_dockerfile>", "quarkus.openshift.jvm-dockerfile=src/main/resources/Dockerfile.custom-jvm", "./mvnw clean package -Dquarkus.openshift.deploy=true", "get pods", "NAME READY STATUS RESTARTS AGE openshift-helloworld-1-build 0/1 Completed 0 11m openshift-helloworld-1-deploy 0/1 Completed 0 10m openshift-helloworld-1-gzzrx 1/1 Running 0 10m", "logs -f openshift-helloworld-1-gzzrx", "Starting the Java application using /opt/jboss/container/java/run/run-java.sh INFO exec -a \"java\" java -Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -XX:MaxRAMPercentage=50.0 -XX:+UseParallelGC -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -XX:+ExitOnOutOfMemoryError -cp \".\" -jar /deployments/quarkus-run.jar __ ____ __ _____ ___ __ ____ ______ --/ __ \\/ / / / _ | / _ \\/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\\ --\\___\\_\\____/_/ |_/_/|_/_/|_|\\____/___/ 2024-06-27 17:13:25,254 INFO [io.quarkus] (main) getting-started 1.0.0-SNAPSHOT on JVM (powered by Quarkus 3.8.6.SP3-redhat-00002) started in 0.653s. Listening on: http://0.0.0.0:8080 2024-06-27 17:13:25,281 INFO [io.quarkus] (main) Profile prod activated. 2024-06-27 17:13:25,281 INFO [io.quarkus] (main) Installed features: [cdi, kubernetes, resteasy-reactive, smallrye-context-propagation, vertx]", "get svc", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE openshift-helloworld ClusterIP 172.30.64.57 <none> 80/TCP 14m", "get routes", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD openshift-helloworld openshift-helloworld-username-dev.apps.sandbox-m2.ll9k.p1.openshiftapps.com openshift-helloworld http None", "quarkus.openshift.build-strategy=docker", "quarkus.native.container-build=true", "quarkus.kubernetes-client.trust-certs=true", "quarkus.openshift.route.expose=true", "quarkus.openshift.native-dockerfile=<path_to_your_dockerfile>", "quarkus.openshift.jvm-dockerfile=src/main/docker/Dockerfile.custom-native", "quarkus.native.container-runtime=podman", "quarkus.native.container-runtime=docker", "./mvnw clean package -Pnative -Dquarkus.openshift.deploy=true", "get is 1 get pods 2 get svc 3", "logs -f <pod_name>", "<maven.compiler.source>17</maven.compiler.source> <maven.compiler.target>17</maven.compiler.target>", "./mvnw clean package", "MAVEN_S2I_ARTIFACT_DIRS=target/quarkus-app S2I_SOURCE_DEPLOYMENTS_FILTER=app lib quarkus quarkus-run.jar JAVA_OPTIONS=-Dquarkus.http.host=0.0.0.0 AB_JOLOKIA_OFF=true JAVA_APP_JAR=/deployments/quarkus-run.jar", "import-image ubi8/openjdk-17 --from=registry.access.redhat.com/ubi8/openjdk-17 --confirm", "new-app registry.access.redhat.com/ubi8/openjdk-17~<git_path> --name=<project_name>", "start-build <project_name>", "expose svc <project_name>", "get pods", "logs -f <pod_name>", "<maven.compiler.source>21</maven.compiler.source> <maven.compiler.target>21</maven.compiler.target>", "./mvnw clean package", "MAVEN_S2I_ARTIFACT_DIRS=target/quarkus-app S2I_SOURCE_DEPLOYMENTS_FILTER=app lib quarkus quarkus-run.jar 
JAVA_OPTIONS=-Dquarkus.http.host=0.0.0.0 AB_JOLOKIA_OFF=true JAVA_APP_JAR=/deployments/quarkus-run.jar", "import-image ubi8/openjdk-21 --from=registry.access.redhat.com/ubi8/openjdk-21 --confirm", "new-app registry.access.redhat.com/ubi8/openjdk-21~<git_path> --name=<project_name>", "start-build <project_name>", "expose svc <project_name>", "get pods", "logs -f <pod_name>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/deploying_your_red_hat_build_of_quarkus_applications_to_openshift_container_platform/assembly_quarkus-openshift_quarkus-openshift
Chapter 1. Eclipse 4.18
Chapter 1. Eclipse 4.18 Red Hat Developer Tools on Red Hat Enterprise Linux 7 is an offering for developers on the RHEL platform that includes Eclipse 4.18, which is based on the Eclipse Foundation's 2020-12 release train. The Eclipse development environment provides tools for each phase of the development process. Eclipse 4.18 on RHEL 7 supports Java development. To learn more about Eclipse, see the main Eclipse foundation page . Sample Eclipse session Eclipse provides a graphical development environment and is therefore an alternative to using the command-line interface. For an overview of how to develop applications for Red Hat JBoss Middleware or for support of OpenShift Tools, see Red Hat Developer Studio . 1.1. Enabling access to Eclipse RPMs on Red Hat Enterprise Linux 7 Eclipse is part of the Red Hat Developer Tools content set for RHEL 7. To install Eclipse, enable the Red Hat Developer Tools, Red Hat Software Collections, and Optional repositories using the Red Hat Subscription Management utility. Prerequisites The host must be registered and attached to a subscription. For more information on registering your system using Red Hat Subscription Management and associating it with subscriptions, see the Red Hat Subscription Management collection of guides. Procedure Choose the system variant , either workstation or server , to use in the following commands. Red Hat recommends choosing server for access to the widest range of development tools. Enable the rhel-7- variant -devtools-rpms repository to access Red Hat Developer Tools: Enable the rhel- variant -rhscl-7-rpms repository to access Red Hat Software Collections: Enable the rhel-7- variant -optional-rpms repository to access additional components: Optional: Enabling the Red Hat Developer Tools debuginfo repositories The Red Hat Developer Tools offering also provides debuginfo packages for all architecture-dependent RPMs in the repositories. These packages are useful for core-file analysis and for debugging Eclipse itself. Procedure Enable the Red Hat Developer Tools debuginfo repositories and replace variant with the Red Hat Enterprise Linux system variant ( server or workstation ): Enable the Red Hat Software Collections debuginfo repository: Additional resources For details on installing, understanding, and using the debuginfo packages, refer to Debugging a Running Application . For more information on registering your system using Red Hat Subscription Management and associating it with subscriptions, see the Red Hat Subscription Management collection of guides. For detailed instructions on managing a subscription to Red Hat Software Collections, see the Red Hat Developer Toolset User Guide Section 1.4. Getting Access to Red Hat Developer Toolset . 1.2. Installing Eclipse The following section describes how to install Eclipse. Note Eclipse is available only on the AMD64 and Intel 64 architectures. Prerequisites On RHEL 7, the repositories must be enabled as per Section 1.1, "Enabling access to Eclipse RPMs on Red Hat Enterprise Linux 7" . Procedure On RHEL 7, run the following command: 1.2.1. Installing additional Eclipse components Eclipse 4.18 on RHEL 7 supports Java development. To install more components from the upstream repositories, for example to support the C and C++ languages, use the Install New Software wizard, Eclipse Marketplace Client, or the command-line interface. Note Installing additional Eclipse components is not possible without access to the internet. 1.2.1.1.
Installing additional Eclipse components using the Install New Software wizard Procedure To use the Install New Software wizard for the installation of additional components, in the main menu click Help > Install New Software and follow the instructions on the screen. 1.2.1.2. Installing additional Eclipse components using Eclipse Marketplace To use the Marketplace Client for the installation of additional components, follow the instructions in Section 1.2.1.2.1, "Example: Installing C and C++ Development Tooling (CDT) using the Eclipse Marketplace Client" . 1.2.1.2.1. Example: Installing C and C++ Development Tooling (CDT) using the Eclipse Marketplace Client Procedure From the main menu, select Help > Eclipse Marketplace . In Eclipse Marketplace, use the Find field to search for the wanted component, in this case CDT, and press Go . Click the Install button to start the installation and follow the instructions on the screen. 1.2.1.3. Installing additional Eclipse components using the command-line interface Red Hat recommends using Eclipse Marketplace or the Install New Software wizard to install additional components to Eclipse. However, it is possible to install components from the command line using the p2 director application. To use the command-line interface for the installation of additional components, follow the instructions in Section 1.2.1.3.1, "Example: Installing Eclipse C and C++ Development Tools using the command-line interface" . 1.2.1.3.1. Example: Installing Eclipse C and C++ Development Tools using the command-line interface Prerequisites Eclipse is not running. Procedure In the command-line interface, run the following command: Start Eclipse. Eclipse C/C++ Development Tools is installed. Warning Running the p2 director application as root causes significant problems for the RPM consistency. Never run the p2 director application as root. Additional resources For a list of available components, see Section 1.4, "Eclipse Components" . For further information on the p2 director application, see Installing software using the p2 director application in the online documentation or the built-in help system of Eclipse. 1.3. Starting Eclipse 1.3.1. Starting Eclipse from the GUI To start Eclipse from the GUI, complete the following steps: Click Applications > Programming > Red Hat Eclipse . 1.3.2. Starting Eclipse from the command-line interface To start Eclipse from the command-line, type the following at a shell prompt: On RHEL 7: While starting, Eclipse prompts you to select a workspace directory for your projects. You can use ~/workspace/ , the default option, or click Browse and select a custom directory. You can also select Use this as the default and do not ask again to prevent Eclipse from displaying this dialog box again. Click OK to confirm the selection and proceed with the start. 1.4. Eclipse Components The Eclipse development environment is provided as a set of RPM packages. The set contains the following Eclipse components: Table 1.1. Eclipse Components on RHEL 7 Package Description rh-eclipse-eclipse-egit EGit, a team provider for Eclipse, provides features and plug-ins for interaction with Git repositories. rh-eclipse-eclipse-emf The Eclipse Modeling Framework (EMF) enables you to build applications based on a structured data model. rh-eclipse-eclipse-gef The Graphical Editing Framework (GEF) enables you to create a rich graphical editor from an existing application model. rh-eclipse-eclipse-jdt The Eclipse Java development tools (JDT) plug-in. 
rh-eclipse-eclipse-jgit JGit, a Java implementation of the Git revision control system. rh-eclipse-eclipse-mpc The Eclipse Marketplace Client. rh-eclipse-eclipse-pde The Plugin Development Environment for developing Eclipse plug-ins. rh-eclipse-eclipse-subclipse Subclipse, a team provider for Eclipse allows you to interact with Subversion repositories. rh-eclipse-eclipse-webtools The Eclipse Webtools plug-ins. Additional resources A detailed description of Eclipse and all its features is beyond the scope of this document. For more information, see the following resources. Installed documentation Eclipse includes a built-in help system that provides extensive documentation for each integrated feature and tool. It is accessible from Eclipse's main menu: Help > Help Contents . Other resources For a list of selected features and improvements in the latest version of the Eclipse development environment, see Section 1.5, "Changes in Eclipse 4.18" . 1.5. Changes in Eclipse 4.18 Eclipse 4.18 ships with Red Hat Developer Tools and plug-ins from the 2020-12 release train that provide a number of bug fixes and feature enhancements. This section lists notable new features and compatibility changes in this release. Significant package updates on RHEL 7 eclipse 4.17 4.18 Eclipse IDE and JDT/PDE plug-ins have been updated to version 4.18. For a more complete list of changes, see the Eclipse 4.18 - New and Noteworthy page. Notable enhancements include: In the Console preference page it is now possible to select the new preference "Enable word wrap". In the Appearance preference page the new "System" theme is now available. It uses system colors to integrate smoothly into your operating system or operating system theme. Eclipse JDT has been updated to use JUnit 5.7. New clean up options and code formatting options have been added to Java Development Tools. In the Arguments tab for Java-based launch configurations (Java Application, JUnit, and others), you can now select the new checkbox to write arguments into an @argfile . eclipse-egit/jgit 5.9.0 5.10.0 The Git integration plug-ins have been updated to version 5.10.0. For details, see the upstream EGit 5.10.0 release notes and JGit 5.10.0 release notes . eclipse-m2e 1.16.2 1.17.1 The Maven integration plug-in has been updated to version 1.17.1. Deprecated functionality on RHEL 7 Python development is no longer supported as part of Eclipse. It can be installed additionally from the Install New Software wizard or Eclipse Marketplace. Additional resources For details on how to use the new features, see Eclipse Installed documentation . 1.6. Known issues in Eclipse 4.18 This section details the known issues in Eclipse 4.18. Known issues on RHEL 7 Initializing Eclipse Error Reporting System error This error occurs when running a workspace created in an older version of Eclipse. To work around this problem, start Eclipse with the -clean option to clear its dependency resolution cache: Eclipse will start without this error message. NullPointerExceptions NullPointerExceptions can occur when you install a plug-in from a third-party update site. In that case, Eclipse fails to start with a NullPointerException in the workspace log file. To work around this problem, restart Eclipse with the -clean option to clear its dependency resolution cache: On RHEL 7: Eclipse will start normally. 
The rh-eclipse-tycho package conflicts with the same package from earlier collections For example: rh-eclipse48-tycho : As a result, the installation of the rh-eclipse-tycho package may fail when the rh-eclipse48-tycho package is already installed. You only need Tycho if you want to build or rebuild Eclipse or its plug-ins. If needed, uninstall the rh-eclipse48-tycho package before installing the rh-eclipse-tycho package using this command: The installation of the rh-eclipse-tycho package will now succeed. The rh-eclipse-scldevel package conflicts with packages from earlier collections For example: rh-maven36-scldevel : As a result, the installation of the rh-maven36-scldevel package may fail when the rh-maven35-scldevel package is already installed. To solve this problem, uninstall the rh-maven35-scldevel package before installing the new version of rh-eclipse-scldevel using this command: The installation of rh-eclipse-scldevel will now succeed. Incompatibilities between Eclipse Subclipse and base RHEL Subversion Working copies of Subversion repositories created with Eclipse Subclipse are incompatible with the base RHEL version of Subversion. Using the svn command on such working copies may result in the following error: To work around this problem, use the pure Java implementation of Subversion used by Eclipse Subclipse on the command line: Now, use the jsvn command anywhere you would normally use the svn command: Lambda expression evaluation failed due to unexpected argument types During compilation, some lambda expressions used in conditional breakpoints or the Expression view are falsely assigned an Object variable type. For example, the expression lotteryNumbers.stream().anyMatch(a -> a >= 42) produces the following error message:
[ "subscription-manager repos --enable rhel-7- variant -devtools-rpms", "subscription-manager repos --enable rhel- variant -rhscl-7-rpms", "subscription-manager repos --enable rhel-7- variant -optional-rpms", "subscription-manager repos --enable rhel-7- variant -devtools-debug-rpms", "subscription-manager repos --enable rhel-__variant__-rhscl-7-debug-rpms", "yum install rh-eclipse", "scl enable rh-eclipse 'eclipse -noSplash -application org.eclipse.equinox.p2.director -repository https://download.eclipse.org/releases/2020-12 -i org.eclipse.cdt.feature.group'", "scl enable rh-eclipse eclipse", "scl enable rh-eclipse \"eclipse -clean\"", "scl enable rh-eclipse \"eclipse -clean\"", "yum remove rh-eclipse48-tycho", "yum remove rh-maven35-scldevel", "svn up svn: E155021: This client is too old to work with the working copy", "yum install rh-eclipse-svnkit-cli # Command line support for SVNKit", "jsvn up Updating '.': At revision 16476.", "The operator >= is undefined for the argument type(s) Object, int" ]
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_eclipse_4.18/eclipse_4_18
Chapter 14. Uninstalling Logging
Chapter 14. Uninstalling Logging You can remove logging from your OpenShift Dedicated cluster by removing installed Operators and related custom resources (CRs). 14.1. Uninstalling the logging You can stop aggregating logs by deleting the Red Hat OpenShift Logging Operator and the ClusterLogging custom resource (CR). Prerequisites You have administrator permissions. You have access to the Administrator perspective of the OpenShift Dedicated web console. Procedure Go to the Administration > Custom Resource Definitions page, and click ClusterLogging . On the Custom Resource Definition Details page, click Instances . Click the options menu next to the instance, and click Delete ClusterLogging . Go to the Administration > Custom Resource Definitions page. Click the options menu next to ClusterLogging , and select Delete Custom Resource Definition . Warning Deleting the ClusterLogging CR does not remove the persistent volume claims (PVCs). To delete the remaining PVCs, persistent volumes (PVs), and associated data, you must take further action. Releasing or deleting PVCs can delete PVs and cause data loss. If you have created a ClusterLogForwarder CR, click the options menu next to ClusterLogForwarder , and then click Delete Custom Resource Definition . Go to the Operators > Installed Operators page. Click the options menu next to the Red Hat OpenShift Logging Operator, and then click Uninstall Operator . Optional: Delete the openshift-logging project. Warning Deleting the openshift-logging project deletes everything in that namespace, including any persistent volume claims (PVCs). If you want to preserve logging data, do not delete the openshift-logging project. Go to the Home > Projects page. Click the options menu next to the openshift-logging project, and then click Delete Project . Confirm the deletion by typing openshift-logging in the dialog box, and then click Delete . 14.2. Deleting logging PVCs To keep persistent volume claims (PVCs) for reuse with other pods, keep the labels or PVC names that you need to reclaim the PVCs. If you do not want to keep the PVCs, you can delete them. If you want to recover storage space, you can also delete the persistent volumes (PVs). Prerequisites You have administrator permissions. You have access to the Administrator perspective of the OpenShift Dedicated web console. Procedure Go to the Storage > Persistent Volume Claims page. Click the options menu next to each PVC, and select Delete Persistent Volume Claim . A command-line sketch for this cleanup is included at the end of this chapter. 14.3. Uninstalling Loki Prerequisites You have administrator permissions. You have access to the Administrator perspective of the OpenShift Dedicated web console. If you have not already removed the Red Hat OpenShift Logging Operator and related resources, you must remove references to LokiStack from the ClusterLogging custom resource. Procedure Go to the Administration > Custom Resource Definitions page, and click LokiStack . On the Custom Resource Definition Details page, click Instances . Click the options menu next to the instance, and then click Delete LokiStack . Go to the Administration > Custom Resource Definitions page. Click the options menu next to LokiStack , and select Delete Custom Resource Definition . Delete the object storage secret. Go to the Operators > Installed Operators page. Click the options menu next to the Loki Operator, and then click Uninstall Operator . Optional: Delete the openshift-operators-redhat project. Important Do not delete the openshift-operators-redhat project if other global Operators are installed in this namespace. Go to the Home > Projects page. 
Click the options menu next to the openshift-operators-redhat project, and then click Delete Project . Confirm the deletion by typing openshift-operators-redhat in the dialog box, and then click Delete . 14.4. Uninstalling Elasticsearch Prerequisites You have administrator permissions. You have access to the Administrator perspective of the OpenShift Dedicated web console. If you have not already removed the Red Hat OpenShift Logging Operator and related resources, you must remove references to Elasticsearch from the ClusterLogging custom resource. Procedure Go to the Administration > Custom Resource Definitions page, and click Elasticsearch . On the Custom Resource Definition Details page, click Instances . Click the options menu next to the instance, and then click Delete Elasticsearch . Go to the Administration > Custom Resource Definitions page. Click the options menu next to Elasticsearch , and select Delete Custom Resource Definition . Delete the object storage secret. Go to the Operators > Installed Operators page. Click the options menu next to the OpenShift Elasticsearch Operator, and then click Uninstall Operator . Optional: Delete the openshift-operators-redhat project. Important Do not delete the openshift-operators-redhat project if other global Operators are installed in this namespace. Go to the Home > Projects page. Click the options menu next to the openshift-operators-redhat project, and then click Delete Project . Confirm the deletion by typing openshift-operators-redhat in the dialog box, and then click Delete . 14.5. Deleting Operators from a cluster using the CLI Cluster administrators can delete installed Operators from a selected namespace by using the CLI. Prerequisites You have access to an OpenShift Dedicated cluster using an account with dedicated-admin permissions. The OpenShift CLI ( oc ) is installed on your workstation. Procedure Ensure the latest version of the subscribed operator (for example, serverless-operator ) is identified in the currentCSV field. USD oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV Example output currentCSV: serverless-operator.v1.28.0 Delete the subscription (for example, serverless-operator ): USD oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless Example output subscription.operators.coreos.com "serverless-operator" deleted Delete the CSV for the Operator in the target namespace using the currentCSV value from the previous step: USD oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless Example output clusterserviceversion.operators.coreos.com "serverless-operator.v1.28.0" deleted Additional resources Reclaiming a persistent volume manually
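The PVC cleanup described in Section 14.2 can also be done with the OpenShift CLI. The following is a minimal sketch rather than part of the official procedure; the openshift-logging namespace comes from this chapter, and <pvc-name> is a placeholder for a name taken from the listing. Deleting a claim can delete the backing PV and its data.

# List the persistent volume claims left behind in the logging namespace
oc get pvc -n openshift-logging

# Delete a claim you no longer need (repeat for each PVC)
oc delete pvc <pvc-name> -n openshift-logging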
[ "oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV", "currentCSV: serverless-operator.v1.28.0", "oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless", "subscription.operators.coreos.com \"serverless-operator\" deleted", "oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless", "clusterserviceversion.operators.coreos.com \"serverless-operator.v1.28.0\" deleted" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/logging/cluster-logging-uninstall
Chapter 12. Supported Configurations
Chapter 12. Supported Configurations Supported configurations for the Streams for Apache Kafka 2.9 release. 12.1. Supported platforms The following platforms are tested for Streams for Apache Kafka 2.9 running with Kafka on the version of OpenShift stated. Platform Version Architecture Red Hat OpenShift Container Platform 4.14 and later x86_64, ppc64le (IBM Power), s390x (IBM Z and IBM(R) LinuxONE), aarch64 (64-bit ARM) Red Hat OpenShift Container Platform disconnected environment Latest x86_64, ppc64le (IBM Power), s390x (IBM Z and IBM(R) LinuxONE), aarch64 (64-bit ARM) Red Hat OpenShift Dedicated Latest x86_64 Microsoft Azure Red Hat OpenShift (ARO) Latest x86_64 Red Hat OpenShift Service on AWS (ROSA) Includes ROSA with hosted control planes (HCP) Latest x86_64 Red Hat build of MicroShift Latest x86_64 Unsupported features Red Hat MicroShift does not support Kafka Connect's build configuration for building container images with connectors. IBM Z and IBM(R) LinuxONE s390x architecture does not support Streams for Apache Kafka OPA integration. FIPS compliance Streams for Apache Kafka is designed for FIPS. Streams for Apache Kafka container images are based on RHEL 9.2, which contains cryptographic modules submitted to NIST for approval. To check which versions of RHEL are approved by the National Institute of Standards and Technology (NIST), see the Cryptographic Module Validation Program on the NIST website. Red Hat OpenShift Container Platform is designed for FIPS. When running on RHEL or RHEL CoreOS booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries submitted to NIST for FIPS validation only on the x86_64, ppc64le (IBM Power), s390x (IBM Z), and aarch64 (64-bit ARM) architectures. For more information about the NIST validation program, see Cryptographic Module Validation Program . For the latest NIST status for the individual versions of the RHEL cryptographic libraries submitted for validation, see Compliance Activities and Government Standards . 12.2. Supported clients Only client libraries built by Red Hat are supported for Streams for Apache Kafka. Currently, Streams for Apache Kafka only provides a Java client library, which is tested and supported on kafka-clients-3.8.0.redhat-00007 and newer. Clients are supported for use with Streams for Apache Kafka 2.9 on the following operating systems and architectures: Operating System Architecture JVM RHEL and UBI 8 and 9 x86, amd64, ppc64le (IBM Power), s390x (IBM Z and IBM(R) LinuxONE), aarch64 (64-bit ARM) Java 11 (deprecated) and Java 17 Clients are tested with Open JDK 11 and 17, though Java 11 is deprecated in Streams for Apache Kafka 2.7 and will be removed in version 3.0. The IBM JDK is supported but not regularly tested against during each release. Oracle JDK 11 is not supported. Support for Red Hat Universal Base Image (UBI) versions correspond to the same RHEL version. 12.3. Supported Apache Kafka ecosystem In Streams for Apache Kafka, only the following components released directly from the Apache Software Foundation are supported: Apache Kafka Broker Apache Kafka Connect Apache MirrorMaker Apache MirrorMaker 2 Apache Kafka Java Producer, Consumer, Management clients, and Kafka Streams Apache ZooKeeper Note Apache ZooKeeper is supported solely as an implementation detail of Apache Kafka and should not be modified for other purposes. 12.4. 
Additional supported features Kafka Bridge Drain Cleaner Cruise Control Distributed Tracing Streams for Apache Kafka Console Streams for Apache Kafka Proxy (technology preview) Note Streams for Apache Kafka Proxy is not production-ready. For the technology preview, it has been tested on x86 and amd64 only. See also, Chapter 14, Supported integration with Red Hat products . 12.5. Console supported browsers Streams for Apache Kafka Console is supported on the most recent stable releases of Firefox, Edge, Chrome and Webkit-based browsers. 12.6. Subscription limits and core usage Cores used by Red Hat components and product operators do not count against subscription limits. Additionally, cores or vCPUs allocated to ZooKeeper nodes are excluded from subscription compliance calculations and do not count towards a subscription. 12.7. Storage requirements Streams for Apache Kafka has been tested with block storage and is compatible with the XFS and ext4 file systems, both of which are commonly used with Kafka. File storage options, such as NFS, are not compatible. Additional resources For information on the supported configurations for the latest LTS release, see the Streams for Apache Kafka LTS Support Policy .
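As a side note to the FIPS compliance paragraph above, you can check whether a RHEL host is booted in FIPS mode before deploying. This is a general RHEL command, not something provided by Streams for Apache Kafka, and is shown only as a hedged convenience.

# Report whether FIPS mode is enabled on a RHEL 8 or RHEL 9 host
fips-mode-setup --check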
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/release_notes_for_streams_for_apache_kafka_2.9_on_openshift/ref-supported-configurations-str
Chapter 6. GNOME Shell Extensions
Chapter 6. GNOME Shell Extensions GNOME Shell in Red Hat Enterprise Linux 7 does not support applets, which were used to customize the default GNOME 2 interface in Red Hat Enterprise Linux 5 and 6. GNOME 3 replaces applets with GNOME Shell extensions . Extensions can modify the default GNOME Shell interface and its parts, such as window management and application launching. 6.1. Replacement for the Clock Applet GNOME 2 in Red Hat Enterprise Linux 5 and 6 featured the Clock applet, which provided access to the date, time, and calendar from the GNOME 2 Panel. In Red Hat Enterprise Linux 7, that applet is replaced by the Clocks application, which is provided by the gnome-clocks package. The user can access that application by clicking the calendar on GNOME Shell's top bar and selecting Open Clocks . Figure 6.1. Open Clocks Getting More Information See Section 11.1, "What Are GNOME Shell Extensions?" for more detailed information on what GNOME Shell extensions are and how to configure and manage them.
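If the Clocks application mentioned above is not yet installed, it can be added from the command line. This is a minimal sketch that assumes the gnome-clocks package is available in your enabled RHEL 7 repositories.

# Install the Clocks application that replaces the GNOME 2 Clock applet (run as root)
yum install gnome-clocks

# Confirm that the package is installed
rpm -q gnome-clocks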
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/migrating-gnome-shell-extensions
Chapter 4. Environment variables for .NET 8.0
Chapter 4. Environment variables for .NET 8.0 The .NET images support several environment variables to control the build behavior of your .NET application. You can set these variables as part of the build configuration, or add them to the .s2i/environment file in the application source code repository. Variable Name Description Default DOTNET_STARTUP_PROJECT Selects the project to run. This must be a project file (for example, csproj or fsproj ) or a folder containing a single project file. . DOTNET_ASSEMBLY_NAME Selects the assembly to run. This must not include the .dll extension. Set this to the output assembly name specified in csproj (PropertyGroup/AssemblyName). The name of the csproj file DOTNET_PUBLISH_READYTORUN When set to true , the application will be compiled ahead of time. This reduces startup time by reducing the amount of work the JIT needs to perform when the application is loading. false DOTNET_RESTORE_SOURCES Specifies the space-separated list of NuGet package sources used during the restore operation. This overrides all of the sources specified in the NuGet.config file. This variable cannot be combined with DOTNET_RESTORE_CONFIGFILE . DOTNET_RESTORE_CONFIGFILE Specifies a NuGet.Config file to be used for restore operations. This variable cannot be combined with DOTNET_RESTORE_SOURCES . DOTNET_TOOLS Specifies a list of .NET tools to install before building the app. It is possible to install a specific version by appending @<version> to the package name. DOTNET_NPM_TOOLS Specifies a list of NPM packages to install before building the application. DOTNET_TEST_PROJECTS Specifies the list of test projects to test. This must be project files or folders containing a single project file. dotnet test is invoked for each item. DOTNET_CONFIGURATION Runs the application in Debug or Release mode. This value should be either Release or Debug . Release DOTNET_VERBOSITY Specifies the verbosity of the dotnet build commands. When set, the environment variables are printed at the start of the build. This variable can be set to one of the msbuild verbosity values ( q[uiet] , m[inimal] , n[ormal] , d[etailed] , and diag[nostic] ). HTTP_PROXY, HTTPS_PROXY Configures the HTTP or HTTPS proxy used when building and running the application, respectively. DOTNET_RM_SRC When set to true , the source code will not be included in the image. DOTNET_SSL_DIRS Deprecated : Use SSL_CERT_DIR instead SSL_CERT_DIR Specifies a list of folders or files with additional SSL certificates to trust. The certificates are trusted by each process that runs during the build and all processes that run in the image after the build (including the application that was built). The items can be absolute paths (starting with / ) or paths in the source repository (for example, certificates). NPM_MIRROR Uses a custom NPM registry mirror to download packages during the build process. ASPNETCORE_URLS This variable is set to http://*:8080 to configure ASP.NET Core to use the port exposed by the image. Changing this is not recommended. http://*:8080 DOTNET_RESTORE_DISABLE_PARALLEL When set to true , disables restoring multiple projects in parallel. This reduces restore timeout errors when the build container is running with low CPU limits. false DOTNET_INCREMENTAL When set to true , the NuGet packages will be kept so they can be re-used for an incremental build. false DOTNET_PACK When set to true , creates a tar.gz file at /opt/app-root/app.tar.gz that contains the published application.
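To illustrate how these variables are typically supplied, the following is a minimal sketch of a .s2i/environment file placed at the root of the application source repository; the project path and values are hypothetical placeholders, not defaults taken from the table above. Each line uses the simple KEY=VALUE format.

DOTNET_STARTUP_PROJECT=src/MyWebApp/MyWebApp.csproj
DOTNET_CONFIGURATION=Release
DOTNET_PUBLISH_READYTORUN=true
DOTNET_RESTORE_DISABLE_PARALLEL=true

The same variables can also be set on an existing build configuration, for example oc set env bc/mywebapp DOTNET_VERBOSITY=minimal , where mywebapp is a placeholder for your build configuration name.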
null
https://docs.redhat.com/en/documentation/net/8.0/html/getting_started_with_.net_on_openshift_container_platform/environmental-variables-for-dotnet_getting-started-with-dotnet-on-openshift
Chapter 4. Rotate your certificates and keys
Chapter 4. Rotate your certificates and keys As a systems administrator, you can proactively rotate the certificates and signer keys used by the Red Hat Trusted Artifact Signer (RHTAS) service running on Red Hat OpenShift. Rotating your keys regularly can prevent key tampering, and theft. These procedures guide you through expiring your old certificates and signer keys, and replacing them with a new certificate and signer key for the underlying services that make up RHTAS. You can rotate keys and certificates for the following services: Rekor Certificate Transparency log Fulcio Timestamp Authority 4.1. Rotating the Rekor signer key You can proactively rotate Rekor's signer key by using the sharding feature to freeze the log tree, and create a new log tree with a new signer key. This procedure walks you through expiring your old Rekor signer key, and replacing it with a new signer key for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old Rekor signer key still allows you to verify artifacts signed by the old key. Important This procedure requires downtime to the Rekor service. Prerequisites Installation of the RHTAS operator running on Red Hat OpenShift Container Platform. A running Securesign instance. A workstation with the oc , openssl , and cosign binaries installed. Procedure Download the rekor-cli binary from the OpenShift cluster to your workstation. Login to the OpenShift web console. From the home page, click the ? icon, click Command line tools , go to the rekor-cli download section, and click the link for your platform. Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit: Example USD gunzip rekor-cli-amd64.gz USD chmod +x rekor-cli-amd64 Move and rename the binary to a location within your USDPATH environment: Example USD sudo mv rekor-cli-amd64 /usr/local/bin/rekor-cli Download the tuftool binary from the OpenShift cluster to your workstation. Important The tuftool binary is only available for Linux operating systems. From the home page, click the ? icon, click Command line tools , go to the tuftool download section, and click the link for your platform. From a terminal on your workstation, decompress the binary .gz file, and set the execute bit: Example USD gunzip tuftool-amd64.gz USD chmod +x tuftool-amd64 Move and rename the binary to a location within your USDPATH environment: Example USD sudo mv tuftool-amd64 /usr/local/bin/tuftool Log in to OpenShift from the command line: Syntax oc login --token= TOKEN --server= SERVER_URL_AND_PORT Example USD oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443 Note You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command . Offer your user name and password again, if asked, and click Display Token to view the command. 
Switch to the RHTAS project: Example USD oc project trusted-artifact-signer Get the Rekor URL: Example USD export REKOR_URL=USD(oc get rekor -o jsonpath='{.items[0].status.url}') Get the log tree identifier for the active shard: Example USD export OLD_TREE_ID=USD(rekor-cli loginfo --rekor_server USDREKOR_URL --format json | jq -r .TreeID) Scale down the Rekor instance, and set the log tree to the DRAINING state: Example USD oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver:8091 --tree_id=USD{OLD_TREE_ID} --tree_state=DRAINING While draining, the tree log will not accept any new entries. Watch and wait for the queue to empty. Important You must wait for the queues to be empty before proceeding to the step. If leaves are still integrating while draining, then freezing the log tree during this process can cause the log path to exceed the maximum merge delay (MMD). Freeze the log tree: Example USD oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver:8091 --tree_id=USD{OLD_TREE_ID} --tree_state=FROZEN Get the length of the frozen log tree: Example USD export OLD_SHARD_LENGTH=USD(rekor-cli loginfo --rekor_server USDREKOR_URL --format json | jq -r .ActiveTreeSize) Get Rekor's public key for the old shard: Example USD export OLD_PUBLIC_KEY=USD(curl -s USDREKOR_URL/api/v1/log/publicKey | base64 | tr -d '\n') Create a new log tree: Example USD export NEW_TREE_ID=USD(oc run createtree --image registry.redhat.io/rhtas/createtree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- -logtostderr=false --admin_server=trillian-logserver:8091 --display_name=rekor-tree) Now you have two log trees, one frozen tree, and a new tree that will become the active shard. Create a new private key: Example USD openssl ecparam -genkey -name secp384r1 -noout -out new-rekor.pem Important The new key must have a unique file name. Create a new secret resource with the new signer key: Example USD oc create secret generic rekor-signer-key --from-file=private=new-rekor.pem Update the Securesign Rekor configuration with the new tree identifier and the old sharding information: Example USD read -r -d '' SECURESIGN_PATCH_1 <<EOF [ { "op": "replace", "path": "/spec/rekor/treeID", "value": USDNEW_TREE_ID }, { "op": "add", "path": "/spec/rekor/sharding/-", "value": { "treeID": USDOLD_TREE_ID, "treeLength": USDOLD_SHARD_LENGTH, "encodedPublicKey": "USDOLD_PUBLIC_KEY" } }, { "op": "replace", "path": "/spec/rekor/signer/keyRef", "value": {"name": "rekor-signer-key", "key": "private"} } ] EOF Note If you have /spec/rekor/signer/keyPasswordRef set with a value, then create a new separate update to remove it: Example USD read -r -d '' SECURESIGN_PATCH_2 <<EOF [ { "op": "remove", "path": "/spec/rekor/signer/keyPasswordRef" } ] EOF Apply this update after applying the first update. Update the Securesign instance: Example USD oc patch Securesign securesign-sample --type='json' -p="USDSECURESIGN_PATCH_1" Wait for the Rekor server to redeploy with the new signer key: Example USD oc wait pod -l app.kubernetes.io/name=rekor-server --for=condition=Ready Get the new public key: Example USD export NEW_KEY_NAME=new-rekor.pub USD curl USD(oc get rekor -o jsonpath='{.items[0].status.url}')/api/v1/log/publicKey -o USDNEW_KEY_NAME Configure The Update Framework (TUF) service to use the new Rekor public key. 
Set up your shell environment: Example USD export WORK="USD{HOME}/trustroot-example" USD export ROOT="USD{WORK}/root/root.json" USD export KEYDIR="USD{WORK}/keys" USD export INPUT="USD{WORK}/input" USD export TUF_REPO="USD{WORK}/tuf-repo" USD export TUF_SERVER_POD="USD(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")" Create a temporary TUF directory structure: Example USD mkdir -p "USD{WORK}/root/" "USD{KEYDIR}" "USD{INPUT}" "USD{TUF_REPO}" Download the TUF contents to the temporary TUF directory structure: Example USD oc extract --to "USD{KEYDIR}/" secret/tuf-root-keys USD oc cp "USD{TUF_SERVER_POD}:/var/www/html" "USD{TUF_REPO}" USD cp "USD{TUF_REPO}/root.json" "USD{ROOT}" Find the active Rekor signer key file name. Open the latest target file, for example, 1.target.json , within the local TUF repository. In this file you will find the active Rekor signer key file name, for example, rekor.pub . Set an environment variable with this active Rekor signer key file name: Example USD export ACTIVE_KEY_NAME=rekor.pub Update the Rekor signer key with the old public key: Example USD echo USDOLD_PUBLIC_KEY | base64 -d > USDACTIVE_KEY_NAME Expire the old Rekor signer key: Example USD tuftool rhtas \ --root "USD{ROOT}" \ --key "USD{KEYDIR}/snapshot.pem" \ --key "USD{KEYDIR}/targets.pem" \ --key "USD{KEYDIR}/timestamp.pem" \ --set-rekor-target "USD{ACTIVE_KEY_NAME}" \ --rekor-uri "https://rekor.rhtas" \ --rekor-status "Expired" \ --outdir "USD{TUF_REPO}" \ --metadata-url "file://USD{TUF_REPO}" Add the new Rekor signer key: Example USD tuftool rhtas \ --root "USD{ROOT}" \ --key "USD{KEYDIR}/snapshot.pem" \ --key "USD{KEYDIR}/targets.pem" \ --key "USD{KEYDIR}/timestamp.pem" \ --set-rekor-target "USD{NEW_KEY_NAME}" \ --rekor-uri "https://rekor.rhtas" \ --outdir "USD{TUF_REPO}" \ --metadata-url "file://USD{TUF_REPO}" Upload these changes to the TUF server: Example USD oc rsync "USD{TUF_REPO}/" "USD{TUF_SERVER_POD}:/var/www/html" Delete the working directory: Example USD rm -r USDWORK Update the cosign configuration with the updated TUF configuration: Example USD cosign initialize --mirror=USDTUF_URL --root=USDTUF_URL/root.json Now, you are ready to sign and verify your artifacts with the new Rekor signer key. 4.2. Rotating the Certificate Transparency log signer key You can proactively rotate Certificate Transparency (CT) log signer key by using the sharding feature to freeze the log tree, and create a new log tree with a new signer key. This procedure walks you through expiring your old CT log signer key, and replacing it with a new signer key for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old CT log signer key still allows you to verify artifacts signed by the old key. Prerequisites Installation of the RHTAS operator running on Red Hat OpenShift Container Platform. A running Securesign instance. A workstation with the oc , openssl , and cosign binaries installed. Procedure Download the tuftool binary from the OpenShift cluster to your workstation. Important The tuftool binary is only available for Linux operating systems. From the home page, click the ? icon, click Command line tools , go to the tuftool download section, and click the link for your platform. 
Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit: Example USD gunzip tuftool-amd64.gz USD chmod +x tuftool-amd64 Move and rename the binary to a location within your USDPATH environment: Example USD sudo mv tuftool-amd64 /usr/local/bin/tuftool Log in to OpenShift from the command line: Syntax oc login --token= TOKEN --server= SERVER_URL_AND_PORT Example USD oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443 Note You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command . Offer your user name and password again, if asked, and click Display Token to view the command. Switch to the RHTAS project: Example USD oc project trusted-artifact-signer Make a backup of the current CT log configuration, and keys: Example USD export SERVER_CONFIG_NAME=USD(oc get ctlog -o jsonpath='{.items[0].status.serverConfigRef.name}') USD oc get secret USDSERVER_CONFIG_NAME -o jsonpath="{.data.config}" | base64 --decode > config.txtpb USD oc get secret USDSERVER_CONFIG_NAME -o jsonpath="{.data.fulcio-0}" | base64 --decode > fulcio-0.pem USD oc get secret USDSERVER_CONFIG_NAME -o jsonpath="{.data.private}" | base64 --decode > private.pem USD oc get secret USDSERVER_CONFIG_NAME -o jsonpath="{.data.public}" | base64 --decode > public.pem Capture the current tree identifier: Example USD export OLD_TREE_ID=USD(oc get ctlog -o jsonpath='{.items[0].status.treeID}') Set the log tree to the DRAINING state: Example USD oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver:8091 --tree_id=USD{OLD_TREE_ID} --tree_state=DRAINING While draining, the tree log will not accept any new entries. Watch and wait for the queue to empty. Important You must wait for the queues to be empty before proceeding to the step. If leaves are still integrating while draining, then freezing the log tree during this process can cause the log path to exceed the maximum merge delay (MMD). Once the queue has been fully drained, freeze the log: Example USD oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver:8091 --tree_id=USD{OLD_TREE_ID} --tree_state=FROZEN Create a new Merkle tree, and capture the new tree identifier: Example USD export NEW_TREE_ID=USD(kubectl run createtree --image registry.redhat.io/rhtas/createtree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- -logtostderr=false --admin_server=trillian-logserver:8091 --display_name=ctlog-tree) Generate a new certificate, along with new public and private keys: Example USD openssl ecparam -genkey -name prime256v1 -noout -out new-ctlog.pem USD openssl ec -in new-ctlog.pem -pubout -out new-ctlog-public.pem USD openssl ec -in new-ctlog.pem -out new-ctlog.pass.pem -des3 -passout pass:" CHANGE_ME " Replace CHANGE_ME with a new password. Important The certificate and new keys must have unique file names. Update the CT log configuration. Open the config.txtpb file for editing. 
For the frozen log, add the not_after_limit field to the frozen log entry, rename the prefix value to a unique name, and replace the old path to the private key with ctfe-keys/private-0 : Example Note You can get the current time value for seconds and nanoseconds, by running the following commands: date +%s , and date +%N . Important The not_after_limit field defines the end of the timestamp range for the frozen log only. Certificates beyond this point in time are no longer accepted for inclusion in this log. Copy and paste the frozen log config block, appending it to the configuration file to create a new entry. Change the following lines in the new config block. Set the log_id to the new tree identifier, change the prefix to trusted-artifact-signer , change the private_key path to ctfe-keys/private , remove the public_key line, and change not_after_limit to not_after_start and set the timestamp range: Example Add the NEW_TREE_ID , and replace CHANGE_ME with the new private key password. The password here must match the password used for generating the new private and public keys. Important The not_after_start field defines the beginning of the timestamp range inclusively. This means the log will start accepting certificates at this point in time. Create a new secret resource: Example USD oc create secret generic ctlog-config \ --from-file=config=config.txtpb \ --from-file=private=new-ctlog.pass.pem \ --from-file=public=new-ctlog-public.pem \ --from-file=fulcio-0=fulcio-0.pem \ --from-file=private-0=private.pem \ --from-file=public-0=public.pem \ --from-literal=password= CHANGE_ME Replace CHANGE_ME with the new private key password. Configure The Update Framework (TUF) service to use the new CT log public key. Set up your shell environment: Example USD export WORK="USD{HOME}/trustroot-example" USD export ROOT="USD{WORK}/root/root.json" USD export KEYDIR="USD{WORK}/keys" USD export INPUT="USD{WORK}/input" USD export TUF_REPO="USD{WORK}/tuf-repo" USD export TUF_SERVER_POD="USD(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")" Create a temporary TUF directory structure: Example USD mkdir -p "USD{WORK}/root/" "USD{KEYDIR}" "USD{INPUT}" "USD{TUF_REPO}" Download the TUF contents to the temporary TUF directory structure: Example USD oc extract --to "USD{KEYDIR}/" secret/tuf-root-keys USD oc cp "USD{TUF_SERVER_POD}:/var/www/html" "USD{TUF_REPO}" USD cp "USD{TUF_REPO}/root.json" "USD{ROOT}" Find the active CT log public key file name. Open the latest target file, for example, 1.targets.json , within the local TUF repository. In this target file you will find the active CT log public key file name, for example, ctfe.pub . 
Set an environment variable with this active CT log public key file name: Example USD export ACTIVE_CTFE_NAME=ctfe.pub Extract the active CT log public key from OpenShift: Example USD oc get secret USD(oc get ctlog securesign-sample -o jsonpath='{.status.publicKeyRef.name}') -o jsonpath='{.data.public}' | base64 -d > USDACTIVE_CTFE_NAME Expire the old CT log signer key: Example USD tuftool rhtas \ --root "USD{ROOT}" \ --key "USD{KEYDIR}/snapshot.pem" \ --key "USD{KEYDIR}/targets.pem" \ --key "USD{KEYDIR}/timestamp.pem" \ --set-ctlog-target "USDACTIVE_CTFE_NAME" \ --ctlog-uri "https://ctlog.rhtas" \ --ctlog-status "Expired" \ --outdir "USD{TUF_REPO}" \ --metadata-url "file://USD{TUF_REPO}" Add the new CT log signer key: Example USD tuftool rhtas \ --root "USD{ROOT}" \ --key "USD{KEYDIR}/snapshot.pem" \ --key "USD{KEYDIR}/targets.pem" \ --key "USD{KEYDIR}/timestamp.pem" \ --set-ctlog-target "new-ctlog-public.pem" \ --ctlog-uri "https://ctlog.rhtas" \ --outdir "USD{TUF_REPO}" \ --metadata-url "file://USD{TUF_REPO}" Upload these changes to the TUF server: Example USD oc rsync "USD{TUF_REPO}/" "USD{TUF_SERVER_POD}:/var/www/html" Delete the working directory: Example USD rm -r USDWORK Update the Securesign CT log configuration with the new tree identifier: Example USD read -r -d '' SECURESIGN_PATCH <<EOF [ { "op": "replace", "path": "/spec/ctlog/serverConfigRef", "value": {"name": "ctlog-config"} }, { "op": "replace", "path": "/spec/ctlog/treeID", "value": USDNEW_TREE_ID }, { "op": "replace", "path": "/spec/ctlog/privateKeyRef", "value": {"name": "ctlog-config", "key": "private"} }, { "op": "replace", "path": "/spec/ctlog/privateKeyPasswordRef", "value": {"name": "ctlog-config", "key": "password"} }, { "op": "replace", "path": "/spec/ctlog/publicKeyRef", "value": {"name": "ctlog-config", "key": "public"} } ] EOF Patch the Securesign instance: Example USD oc patch Securesign securesign-sample --type='json' -p="USDSECURESIGN_PATCH" Wait for the CT log server to redeploy: Example USD oc wait pod -l app.kubernetes.io/name=ctlog --for=condition=Ready Update the cosign configuration with the updated TUF configuration: Example USD cosign initialize --mirror=USDTUF_URL --root=USDTUF_URL/root.json Now, you are ready to sign and verify your artifacts with the new CT log signer key. 4.3. Rotating the Fulcio certificate You can proactively rotate the certificate used by the Fulcio service. This procedure walks you through expiring your old Fulcio certificate, and replacing it with a new certificate for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old Fulcio certificate still allows you to verify artifacts signed by the old certificate. Prerequisites Installation of the RHTAS operator running on Red Hat OpenShift Container Platform. A running Securesign instance. A workstation with the oc , openssl , and cosign binaries installed. Procedure Download the tuftool binary from the OpenShift cluster to your workstation. Important The tuftool binary is only available for Linux operating systems. From the home page, click the ? icon, click Command line tools , go to the tuftool download section, and click the link for your platform. 
Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit: Example USD gunzip tuftool-amd64.gz USD chmod +x tuftool-amd64 Move and rename the binary to a location within your USDPATH environment: Example USD sudo mv tuftool-amd64 /usr/local/bin/tuftool Log in to OpenShift from the command line: Syntax oc login --token= TOKEN --server= SERVER_URL_AND_PORT Example USD oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443 Note You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command . Offer your user name and password again, if asked, and click Display Token to view the command. Switch to the RHTAS project: Example USD oc project trusted-artifact-signer Generate a new certificate, along with new public and private keys: Example USD openssl ecparam -genkey -name prime256v1 -noout -out new-fulcio.pem USD openssl ec -in new-fulcio.pem -pubout -out new-fulcio-public.pem USD openssl ec -in new-fulcio.pem -out new-fulcio.pass.pem -des3 -passout pass:" CHANGE_ME " USD openssl req -new -x509 -key new-fulcio.pass.pem -out new-fulcio.cert.pem Replace CHANGE_ME with a new password. Important The certificate and new keys must have unique file names. Create a new secret: Example USD oc create secret generic fulcio-config \ --from-file=private=new-fulcio.pass.pem \ --from-file=cert=new-fulcio.cert.pem \ --from-literal=password= CHANGE_ME Replace CHANGE_ME with a new password. Note The password here must match the password used for generating the new private and public keys. Configure The Update Framework (TUF) service to use the new Fulcio certificate. Set up your shell environment: Example USD export WORK="USD{HOME}/trustroot-example" USD export ROOT="USD{WORK}/root/root.json" USD export KEYDIR="USD{WORK}/keys" USD export INPUT="USD{WORK}/input" USD export TUF_REPO="USD{WORK}/tuf-repo" USD export TUF_SERVER_POD="USD(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")" Create a temporary TUF directory structure: Example USD mkdir -p "USD{WORK}/root/" "USD{KEYDIR}" "USD{INPUT}" "USD{TUF_REPO}" Download the TUF contents to the temporary TUF directory structure: Example USD oc extract --to "USD{KEYDIR}/" secret/tuf-root-keys USD oc cp "USD{TUF_SERVER_POD}:/var/www/html" "USD{TUF_REPO}" USD cp "USD{TUF_REPO}/root.json" "USD{ROOT}" Find the active Fulcio certificate file name. Open the latest target file, for example, 1.targets.json , within the local TUF repository. In this file you will find the active Fulcio certificate file name, for example, fulcio_v1.crt.pem . 
Set an environment variable with this active Fulcio certificate file name: Example USD export ACTIVE_CERT_NAME=fulcio_v1.crt.pem Extract the active Fulcio certificate from OpenShift: Example USD oc get secret USD(oc get fulcio securesign-sample -o jsonpath='{.status.certificate.caRef.name}') -o jsonpath='{.data.cert}' | base64 -d > USDACTIVE_CERT_NAME Expire the old certificate: Example USD tuftool rhtas \ --root "USD{ROOT}" \ --key "USD{KEYDIR}/snapshot.pem" \ --key "USD{KEYDIR}/targets.pem" \ --key "USD{KEYDIR}/timestamp.pem" \ --set-fulcio-target "USDACTIVE_CERT_NAME" \ --fulcio-uri "https://fulcio.rhtas" \ --fulcio-status "Expired" \ --outdir "USD{TUF_REPO}" \ --metadata-url "file://USD{TUF_REPO}" Add the new Fulcio certificate: Example USD tuftool rhtas \ --root "USD{ROOT}" \ --key "USD{KEYDIR}/snapshot.pem" \ --key "USD{KEYDIR}/targets.pem" \ --key "USD{KEYDIR}/timestamp.pem" \ --set-fulcio-target "new-fulcio.cert.pem" \ --fulcio-uri "https://fulcio.rhtas" \ --outdir "USD{TUF_REPO}" \ --metadata-url "file://USD{TUF_REPO}" Upload these changes to the TUF server: Example USD oc rsync "USD{TUF_REPO}/" "USD{TUF_SERVER_POD}:/var/www/html" Delete the working directory: Example USD rm -r USDWORK Update the Securesign Fulcio configuration: Example USD read -r -d '' SECURESIGN_PATCH <<EOF [ { "op": "replace", "path": "/spec/fulcio/certificate/privateKeyRef", "value": {"name": "fulcio-config", "key": "private"} }, { "op": "replace", "path": "/spec/fulcio/certificate/privateKeyPasswordRef", "value": {"name": "fulcio-config", "key": "password"} }, { "op": "replace", "path": "/spec/fulcio/certificate/caRef", "value": {"name": "fulcio-config", "key": "cert"} }, { "op": "replace", "path": "/spec/ctlog/rootCertificates", "value": [{"name": "fulcio-config", "key": "cert"}] } ] EOF Patch the Securesign instance: Example USD oc patch Securesign securesign-sample --type='json' -p="USDSECURESIGN_PATCH" Wait for the Fulcio server to redeploy: Example USD oc wait pod -l app.kubernetes.io/name=fulcio-server --for=condition=Ready USD oc wait pod -l app.kubernetes.io/name=ctlog --for=condition=Ready Update the cosign configuration with the updated TUF configuration: Example USD cosign initialize --mirror=USDTUF_URL --root=USDTUF_URL/root.json Now, you are ready to sign and verify your artifacts with the new Fulcio certificate. 4.4. Rotating the Timestamp Authority signer key and certificate chain You can proactively rotate the Timestamp Authority (TSA) signer key and certificate chain. This procedure walks you through expiring your old TSA signer key and certificate chain, and replacing them with new ones for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old TSA signer key and certificate chain still allows you to verify artifacts signed by the old key and certificate chain. Prerequisites Installation of the RHTAS operator running on Red Hat OpenShift Container Platform. A running Securesign instance. A workstation with the oc and openssl binaries installed. Procedure Download the tuftool binary from the OpenShift cluster to your workstation. Important The tuftool binary is only available for Linux operating systems. From the home page, click the ? icon, click Command line tools , go to the tuftool download section, and click the link for your platform. 
Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit: Example USD gunzip tuftool-amd64.gz USD chmod +x tuftool-amd64 Move and rename the binary to a location within your USDPATH environment: Example USD sudo mv tuftool-amd64 /usr/local/bin/tuftool Log in to OpenShift from the command line: Syntax oc login --token= TOKEN --server= SERVER_URL_AND_PORT Example USD oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443 Note You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command . Offer your user name and password again, if asked, and click Display Token to view the command. Switch to the RHTAS project: Example USD oc project trusted-artifact-signer Generate a new certificate chain, and a new signer key. Important The new certificate and keys must have unique file names. Create a temporary working directory: Example USD mkdir certs && cd certs Create the root certificate authority (CA) private key, and set a password: Example USD openssl req -x509 -newkey rsa:2048 -days 365 -sha256 -nodes \ -keyout rootCA.key.pem -out rootCA.crt.pem \ -passout pass:" CHANGE_ME " \ -subj "/C=CC/ST=state/L=Locality/O=RH/OU=RootCA/CN=RootCA" \ -addext "basicConstraints=CA:true" -addext "keyUsage=cRLSign, keyCertSign" Replace CHANGE_ME with a new password. Create the intermediate CA private key and certificate signing request (CSR), and set a password: Example USD openssl req -newkey rsa:2048 -sha256 \ -keyout intermediateCA.key.pem -out intermediateCA.csr.pem \ -passout pass:" CHANGE_ME " \ -subj "/C=CC/ST=state/L=Locality/O=RH/OU=IntermediateCA/CN=IntermediateCA" Replace CHANGE_ME with a new password. Sign the intermediate CA certificate with the root CA: Example USD openssl x509 -req -in intermediateCA.csr.pem -CA rootCA.crt.pem -CAkey rootCA.key.pem \ -CAcreateserial -out intermediateCA.crt.pem -days 365 -sha256 \ -extfile <(echo -e "basicConstraints=CA:true\nkeyUsage=cRLSign, keyCertSign\nextendedKeyUsage=critical,timeStamping") \ -passin pass:" CHANGE_ME " Replace CHANGE_ME with the root CA private key password to sign the intermediate CA certificate. Create the leaf CA private key and CSR, and set a password: Example USD openssl req -newkey rsa:2048 -sha256 \ -keyout leafCA.key.pem -out leafCA.csr.pem \ -passout pass:" CHANGE_ME " \ -subj "/C=CC/ST=state/L=Locality/O=RH/OU=LeafCA/CN=LeafCA" Sign the leaf CA certificate with the intermediate CA: Example USD openssl x509 -req -in leafCA.csr.pem -CA intermediateCA.crt.pem -CAkey intermediateCA.key.pem \ -CAcreateserial -out leafCA.crt.pem -days 365 -sha256 \ -extfile <(echo -e "basicConstraints=CA:false\nkeyUsage=cRLSign, keyCertSign\nextendedKeyUsage=critical,timeStamping") \ -passin pass:" CHANGE_ME " Replace CHANGE_ME with the intermediate CA private key password to sign the leaf CA certificate. 
Create the certificate chain by combining the newly created certificates: Example USD cat leafCA.crt.pem intermediateCA.crt.pem rootCA.crt.pem > new-cert-chain.pem An optional openssl check of this combined chain is sketched at the end of this section. Create a new secret resource with the signer key: Example USD oc create secret generic rotated-signer-key --from-file=rotated-signer-key=certs/leafCA.key.pem Create a new secret resource with the new certificate chain: Example USD oc create secret generic rotated-cert-chain --from-file=rotated-cert-chain=certs/new-cert-chain.pem Create a new secret resource for the password: Example USD oc create secret generic rotated-password --from-literal=rotated-password= CHANGE_ME Replace CHANGE_ME with the intermediate CA private key password. Find your active TSA certificate file name, the TSA URL string, and configure your shell environment with these values: Example USD export ACTIVE_CERT_CHAIN_NAME=tsa.certchain.pem USD export TSA_URL=USD(oc get timestampauthority securesign-sample -o jsonpath='{.status.url}')/api/v1/timestamp USD curl USDTSA_URL/certchain -o USDACTIVE_CERT_CHAIN_NAME Update the Securesign TSA configuration: Example USD read -r -d '' SECURESIGN_PATCH <<EOF [ { "op": "replace", "path": "/spec/tsa/signer/certificateChain", "value": { "certificateChainRef" : {"name": "rotated-cert-chain", "key": "rotated-cert-chain"} } }, { "op": "replace", "path": "/spec/tsa/signer/file", "value": { "privateKeyRef": {"name": "rotated-signer-key", "key": "rotated-signer-key"}, "passwordRef": {"name": "rotated-password", "key": "rotated-password"} } } ] EOF Patch the Securesign instance: Example USD oc patch Securesign securesign-sample --type='json' -p="USDSECURESIGN_PATCH" Wait for the TSA server to redeploy with the new signer key and certificate chain: Example USD oc get pods -w -l app.kubernetes.io/name=tsa-server Get the new certificate chain: Example USD export NEW_CERT_CHAIN_NAME=new_tsa.certchain.pem USD curl USDTSA_URL/certchain -o USDNEW_CERT_CHAIN_NAME Configure The Update Framework (TUF) service to use the new TSA certificate chain. 
Set up your shell environment: Example USD export WORK="USD{HOME}/trustroot-example" USD export ROOT="USD{WORK}/root/root.json" USD export KEYDIR="USD{WORK}/keys" USD export INPUT="USD{WORK}/input" USD export TUF_REPO="USD{WORK}/tuf-repo" USD export TUF_SERVER_POD="USD(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")" Create a temporary TUF directory structure: Example USD mkdir -p "USD{WORK}/root/" "USD{KEYDIR}" "USD{INPUT}" "USD{TUF_REPO}" Download the TUF contents to the temporary TUF directory structure: Example USD oc extract --to "USD{KEYDIR}/" secret/tuf-root-keys USD oc cp "USD{TUF_SERVER_POD}:/var/www/html" "USD{TUF_REPO}" USD cp "USD{TUF_REPO}/root.json" "USD{ROOT}" Expire the old TSA certificate: Example USD tuftool rhtas \ --root "USD{ROOT}" \ --key "USD{KEYDIR}/snapshot.pem" \ --key "USD{KEYDIR}/targets.pem" \ --key "USD{KEYDIR}/timestamp.pem" \ --set-tsa-target "USDACTIVE_CERT_CHAIN_NAME" \ --tsa-uri "USDTSA_URL" \ --tsa-status "Expired" \ --outdir "USD{TUF_REPO}" \ --metadata-url "file://USD{TUF_REPO}" Add the new TSA certificate: Example USD tuftool rhtas \ --root "USD{ROOT}" \ --key "USD{KEYDIR}/snapshot.pem" \ --key "USD{KEYDIR}/targets.pem" \ --key "USD{KEYDIR}/timestamp.pem" \ --set-tsa-target "USDNEW_CERT_CHAIN_NAME" \ --tsa-uri "USDTSA_URL" \ --outdir "USD{TUF_REPO}" \ --metadata-url "file://USD{TUF_REPO}" Upload these changes to the TUF server: Example USD oc rsync "USD{TUF_REPO}/" "USD{TUF_SERVER_POD}:/var/www/html" Delete the working directory: Example USD rm -r USDWORK Update the cosign configuration with the updated TUF configuration: Example USD cosign initialize --mirror=USDTUF_URL --root=USDTUF_URL/root.json Now, you are ready to sign and verify your artifacts with the new TSA signer key and certificate.
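As an optional sanity check before relying on the rotated chain, you can inspect the files created in this procedure with openssl. This is a hedged sketch using the file names from the steps above (run from inside the certs working directory); it is not part of the official rotation procedure.

# Print the subject and issuer of every certificate in the combined chain
openssl crl2pkcs7 -nocrl -certfile new-cert-chain.pem | openssl pkcs7 -print_certs -noout

# Confirm the leaf certificate carries the critical timeStamping extended key usage
openssl x509 -in leafCA.crt.pem -noout -text | grep -A1 "Extended Key Usage"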
[ "gunzip rekor-cli-amd64.gz chmod +x rekor-cli-amd64", "sudo mv rekor-cli-amd64 /usr/local/bin/rekor-cli", "gunzip tuftool-amd64.gz chmod +x tuftool-amd64", "sudo mv tuftool-amd64 /usr/local/bin/tuftool", "login --token= TOKEN --server= SERVER_URL_AND_PORT", "oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443", "oc project trusted-artifact-signer", "export REKOR_URL=USD(oc get rekor -o jsonpath='{.items[0].status.url}')", "export OLD_TREE_ID=USD(rekor-cli loginfo --rekor_server USDREKOR_URL --format json | jq -r .TreeID)", "oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver:8091 --tree_id=USD{OLD_TREE_ID} --tree_state=DRAINING", "oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver:8091 --tree_id=USD{OLD_TREE_ID} --tree_state=FROZEN", "export OLD_SHARD_LENGTH=USD(rekor-cli loginfo --rekor_server USDREKOR_URL --format json | jq -r .ActiveTreeSize)", "export OLD_PUBLIC_KEY=USD(curl -s USDREKOR_URL/api/v1/log/publicKey | base64 | tr -d '\\n')", "export NEW_TREE_ID=USD(oc run createtree --image registry.redhat.io/rhtas/createtree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- -logtostderr=false --admin_server=trillian-logserver:8091 --display_name=rekor-tree)", "openssl ecparam -genkey -name secp384r1 -noout -out new-rekor.pem", "oc create secret generic rekor-signer-key --from-file=private=new-rekor.pem", "read -r -d '' SECURESIGN_PATCH_1 <<EOF [ { \"op\": \"replace\", \"path\": \"/spec/rekor/treeID\", \"value\": USDNEW_TREE_ID }, { \"op\": \"add\", \"path\": \"/spec/rekor/sharding/-\", \"value\": { \"treeID\": USDOLD_TREE_ID, \"treeLength\": USDOLD_SHARD_LENGTH, \"encodedPublicKey\": \"USDOLD_PUBLIC_KEY\" } }, { \"op\": \"replace\", \"path\": \"/spec/rekor/signer/keyRef\", \"value\": {\"name\": \"rekor-signer-key\", \"key\": \"private\"} } ] EOF", "read -r -d '' SECURESIGN_PATCH_2 <<EOF [ { \"op\": \"remove\", \"path\": \"/spec/rekor/signer/keyPasswordRef\" } ] EOF", "oc patch Securesign securesign-sample --type='json' -p=\"USDSECURESIGN_PATCH_1\"", "oc wait pod -l app.kubernetes.io/name=rekor-server --for=condition=Ready", "export NEW_KEY_NAME=new-rekor.pub curl USD(oc get rekor -o jsonpath='{.items[0].status.url}')/api/v1/log/publicKey -o USDNEW_KEY_NAME", "export WORK=\"USD{HOME}/trustroot-example\" export ROOT=\"USD{WORK}/root/root.json\" export KEYDIR=\"USD{WORK}/keys\" export INPUT=\"USD{WORK}/input\" export TUF_REPO=\"USD{WORK}/tuf-repo\" export TUF_SERVER_POD=\"USD(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=\":metadata.name\")\"", "mkdir -p \"USD{WORK}/root/\" \"USD{KEYDIR}\" \"USD{INPUT}\" \"USD{TUF_REPO}\"", "oc extract --to \"USD{KEYDIR}/\" secret/tuf-root-keys oc cp \"USD{TUF_SERVER_POD}:/var/www/html\" \"USD{TUF_REPO}\" cp \"USD{TUF_REPO}/root.json\" \"USD{ROOT}\"", "export ACTIVE_KEY_NAME=rekor.pub", "echo USDOLD_PUBLIC_KEY | base64 -d > USDACTIVE_KEY_NAME", "tuftool rhtas --root \"USD{ROOT}\" --key \"USD{KEYDIR}/snapshot.pem\" --key \"USD{KEYDIR}/targets.pem\" --key \"USD{KEYDIR}/timestamp.pem\" --set-rekor-target \"USD{ACTIVE_KEY_NAME}\" --rekor-uri \"https://rekor.rhtas\" --rekor-status \"Expired\" --outdir \"USD{TUF_REPO}\" --metadata-url \"file://USD{TUF_REPO}\"", "tuftool rhtas --root \"USD{ROOT}\" --key \"USD{KEYDIR}/snapshot.pem\" --key \"USD{KEYDIR}/targets.pem\" 
--key \"USD{KEYDIR}/timestamp.pem\" --set-rekor-target \"USD{NEW_KEY_NAME}\" --rekor-uri \"https://rekor.rhtas\" --outdir \"USD{TUF_REPO}\" --metadata-url \"file://USD{TUF_REPO}\"", "oc rsync \"USD{TUF_REPO}/\" \"USD{TUF_SERVER_POD}:/var/www/html\"", "rm -r USDWORK", "cosign initialize --mirror=USDTUF_URL --root=USDTUF_URL/root.json", "gunzip tuftool-amd64.gz chmod +x tuftool-amd64", "sudo mv tuftool-amd64 /usr/local/bin/tuftool", "login --token= TOKEN --server= SERVER_URL_AND_PORT", "oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443", "oc project trusted-artifact-signer", "export SERVER_CONFIG_NAME=USD(oc get ctlog -o jsonpath='{.items[0].status.serverConfigRef.name}') oc get secret USDSERVER_CONFIG_NAME -o jsonpath=\"{.data.config}\" | base64 --decode > config.txtpb oc get secret USDSERVER_CONFIG_NAME -o jsonpath=\"{.data.fulcio-0}\" | base64 --decode > fulcio-0.pem oc get secret USDSERVER_CONFIG_NAME -o jsonpath=\"{.data.private}\" | base64 --decode > private.pem oc get secret USDSERVER_CONFIG_NAME -o jsonpath=\"{.data.public}\" | base64 --decode > public.pem", "export OLD_TREE_ID=USD(oc get ctlog -o jsonpath='{.items[0].status.treeID}')", "oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver:8091 --tree_id=USD{OLD_TREE_ID} --tree_state=DRAINING", "oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver:8091 --tree_id=USD{OLD_TREE_ID} --tree_state=FROZEN", "export NEW_TREE_ID=USD(kubectl run createtree --image registry.redhat.io/rhtas/createtree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- -logtostderr=false --admin_server=trillian-logserver:8091 --display_name=ctlog-tree)", "openssl ecparam -genkey -name prime256v1 -noout -out new-ctlog.pem openssl ec -in new-ctlog.pem -pubout -out new-ctlog-public.pem openssl ec -in new-ctlog.pem -out new-ctlog.pass.pem -des3 -passout pass:\" CHANGE_ME \"", "log_configs:{ # frozen log config:{ log_id:2066075212146181968 prefix:\"trusted-artifact-signer-0\" roots_pem_file:\"/ctfe-keys/fulcio-0\" private_key:{[type.googleapis.com/keyspb.PEMKeyFile]:{path:\"/ctfe-keys/private-0\" password:\"Example123\"}} public_key:{der:\"0Y0\\x13\\x06\\x07*\\x86H\\xce=\\x02\\x01\\x06\\x08*\\x86H\\xce=\\x03\\x01\\x07\\x03B\\x00\\x04)'.\\xffUJ\\xe2s)\\xefR\\x8a\\xfcO\\xdcewty\\xa7\\x9d<\\x13\\xb0\\x1c\\x99\\x96\\xe4'\\xe3v\\x07:\\xc8I+\\x08J\\x9d\\x8a\\xed\\x06\\xe4\\xaeI:q\\x98\\xf4\\xbc<o4VD\\x0cr\\xf9\\x9c\\xecxT\\x84\"} not_after_limit :{seconds:1728056285 nanos:012111000} ext_key_usages:\"CodeSigning\" log_backend_name:\"trillian\" }", "log_configs:{ # frozen log # new active log config:{ log_id: NEW_TREE_ID prefix:\"trusted-artifact-signer\" roots_pem_file:\"/ctfe-keys/fulcio-0\" private_key:{[type.googleapis.com/keyspb.PEMKeyFile]:{path:\"ctfe-keys/private\" password:\" CHANGE_ME \"}} ext_key_usages:\"CodeSigning\" not_after_start:{seconds:1713201754 nanos:155663000} log_backend_name:\"trillian\" }", "oc create secret generic ctlog-config --from-file=config=config.txtpb --from-file=private=new-ctlog.pass.pem --from-file=public=new-ctlog-public.pem --from-file=fulcio-0=fulcio-0.pem --from-file=private-0=private.pem --from-file=public-0=public.pem --from-literal=password= CHANGE_ME", "export WORK=\"USD{HOME}/trustroot-example\" export ROOT=\"USD{WORK}/root/root.json\" export KEYDIR=\"USD{WORK}/keys\" export 
INPUT=\"USD{WORK}/input\" export TUF_REPO=\"USD{WORK}/tuf-repo\" export TUF_SERVER_POD=\"USD(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=\":metadata.name\")\"", "mkdir -p \"USD{WORK}/root/\" \"USD{KEYDIR}\" \"USD{INPUT}\" \"USD{TUF_REPO}\"", "oc extract --to \"USD{KEYDIR}/\" secret/tuf-root-keys oc cp \"USD{TUF_SERVER_POD}:/var/www/html\" \"USD{TUF_REPO}\" cp \"USD{TUF_REPO}/root.json\" \"USD{ROOT}\"", "export ACTIVE_CTFE_NAME=ctfe.pub", "oc get secret USD(oc get ctlog securesign-sample -o jsonpath='{.status.publicKeyRef.name}') -o jsonpath='{.data.public}' | base64 -d > USDACTIVE_CTFE_NAME", "tuftool rhtas --root \"USD{ROOT}\" --key \"USD{KEYDIR}/snapshot.pem\" --key \"USD{KEYDIR}/targets.pem\" --key \"USD{KEYDIR}/timestamp.pem\" --set-ctlog-target \"USDACTIVE_CTFE_NAME\" --ctlog-uri \"https://ctlog.rhtas\" --ctlog-status \"Expired\" --outdir \"USD{TUF_REPO}\" --metadata-url \"file://USD{TUF_REPO}\"", "tuftool rhtas --root \"USD{ROOT}\" --key \"USD{KEYDIR}/snapshot.pem\" --key \"USD{KEYDIR}/targets.pem\" --key \"USD{KEYDIR}/timestamp.pem\" --set-ctlog-target \"new-ctlog-public.pem\" --ctlog-uri \"https://ctlog.rhtas\" --outdir \"USD{TUF_REPO}\" --metadata-url \"file://USD{TUF_REPO}\"", "oc rsync \"USD{TUF_REPO}/\" \"USD{TUF_SERVER_POD}:/var/www/html\"", "rm -r USDWORK", "read -r -d '' SECURESIGN_PATCH <<EOF [ { \"op\": \"replace\", \"path\": \"/spec/ctlog/serverConfigRef\", \"value\": {\"name\": \"ctlog-config\"} }, { \"op\": \"replace\", \"path\": \"/spec/ctlog/treeID\", \"value\": USDNEW_TREE_ID }, { \"op\": \"replace\", \"path\": \"/spec/ctlog/privateKeyRef\", \"value\": {\"name\": \"ctlog-config\", \"key\": \"private\"} }, { \"op\": \"replace\", \"path\": \"/spec/ctlog/privateKeyPasswordRef\", \"value\": {\"name\": \"ctlog-config\", \"key\": \"password\"} }, { \"op\": \"replace\", \"path\": \"/spec/ctlog/publicKeyRef\", \"value\": {\"name\": \"ctlog-config\", \"key\": \"public\"} } ] EOF", "oc patch Securesign securesign-sample --type='json' -p=\"USDSECURESIGN_PATCH\"", "oc wait pod -l app.kubernetes.io/name=ctlog --for=condition=Ready", "cosign initialize --mirror=USDTUF_URL --root=USDTUF_URL/root.json", "gunzip tuftool-amd64.gz chmod +x tuftool-amd64", "sudo mv tuftool-amd64 /usr/local/bin/tuftool", "login --token= TOKEN --server= SERVER_URL_AND_PORT", "oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443", "oc project trusted-artifact-signer", "openssl ecparam -genkey -name prime256v1 -noout -out new-fulcio.pem openssl ec -in new-fulcio.pem -pubout -out new-fulcio-public.pem openssl ec -in new-fulcio.pem -out new-fulcio.pass.pem -des3 -passout pass:\" CHANGE_ME \" openssl req -new -x509 -key new-fulcio.pass.pem -out new-fulcio.cert.pem", "oc create secret generic fulcio-config --from-file=private=new-fulcio.pass.pem --from-file=cert=new-fulcio.cert.pem --from-literal=password= CHANGE_ME", "export WORK=\"USD{HOME}/trustroot-example\" export ROOT=\"USD{WORK}/root/root.json\" export KEYDIR=\"USD{WORK}/keys\" export INPUT=\"USD{WORK}/input\" export TUF_REPO=\"USD{WORK}/tuf-repo\" export TUF_SERVER_POD=\"USD(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=\":metadata.name\")\"", "mkdir -p \"USD{WORK}/root/\" \"USD{KEYDIR}\" \"USD{INPUT}\" \"USD{TUF_REPO}\"", "oc extract --to \"USD{KEYDIR}/\" secret/tuf-root-keys oc cp \"USD{TUF_SERVER_POD}:/var/www/html\" \"USD{TUF_REPO}\" cp \"USD{TUF_REPO}/root.json\" \"USD{ROOT}\"", "export 
ACTIVE_CERT_NAME=fulcio_v1.crt.pem", "oc get secret USD(oc get fulcio securesign-sample -o jsonpath='{.status.certificate.caRef.name}') -o jsonpath='{.data.cert}' | base64 -d > USDACTIVE_CERT_NAME", "tuftool rhtas --root \"USD{ROOT}\" --key \"USD{KEYDIR}/snapshot.pem\" --key \"USD{KEYDIR}/targets.pem\" --key \"USD{KEYDIR}/timestamp.pem\" --set-fulcio-target \"USDACTIVE_CERT_NAME\" --fulcio-uri \"https://fulcio.rhtas\" --fulcio-status \"Expired\" --outdir \"USD{TUF_REPO}\" --metadata-url \"file://USD{TUF_REPO}\"", "tuftool rhtas --root \"USD{ROOT}\" --key \"USD{KEYDIR}/snapshot.pem\" --key \"USD{KEYDIR}/targets.pem\" --key \"USD{KEYDIR}/timestamp.pem\" --set-fulcio-target \"new-fulcio.cert.pem\" --fulcio-uri \"https://fulcio.rhtas\" --outdir \"USD{TUF_REPO}\" --metadata-url \"file://USD{TUF_REPO}\"", "oc rsync \"USD{TUF_REPO}/\" \"USD{TUF_SERVER_POD}:/var/www/html\"", "rm -r USDWORK", "read -r -d '' SECURESIGN_PATCH <<EOF [ { \"op\": \"replace\", \"path\": \"/spec/fulcio/certificate/privateKeyRef\", \"value\": {\"name\": \"fulcio-config\", \"key\": \"private\"} }, { \"op\": \"replace\", \"path\": \"/spec/fulcio/certificate/privateKeyPasswordRef\", \"value\": {\"name\": \"fulcio-config\", \"key\": \"password\"} }, { \"op\": \"replace\", \"path\": \"/spec/fulcio/certificate/caRef\", \"value\": {\"name\": \"fulcio-config\", \"key\": \"cert\"} }, { \"op\": \"replace\", \"path\": \"/spec/ctlog/rootCertificates\", \"value\": [{\"name\": \"fulcio-config\", \"key\": \"cert\"}] } ] EOF", "oc patch Securesign securesign-sample --type='json' -p=\"USDSECURESIGN_PATCH\"", "oc wait pod -l app.kubernetes.io/name=fulcio-server --for=condition=Ready oc wait pod -l app.kubernetes.io/name=ctlog --for=condition=Ready", "cosign initialize --mirror=USDTUF_URL --root=USDTUF_URL/root.json", "gunzip tuftool-amd64.gz chmod +x tuftool-amd64", "sudo mv tuftool-amd64 /usr/local/bin/tuftool", "login --token= TOKEN --server= SERVER_URL_AND_PORT", "oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443", "oc project trusted-artifact-signer", "mkdir certs && cd certs", "openssl req -x509 -newkey rsa:2048 -days 365 -sha256 -nodes -keyout rootCA.key.pem -out rootCA.crt.pem -passout pass:\" CHANGE_ME \" -subj \"/C=CC/ST=state/L=Locality/O=RH/OU=RootCA/CN=RootCA\" -addext \"basicConstraints=CA:true\" -addext \"keyUsage=cRLSign, keyCertSign\"", "openssl req -newkey rsa:2048 -sha256 -keyout intermediateCA.key.pem -out intermediateCA.csr.pem -passout pass:\" CHANGE_ME \" -subj \"/C=CC/ST=state/L=Locality/O=RH/OU=IntermediateCA/CN=IntermediateCA\"", "openssl x509 -req -in intermediateCA.csr.pem -CA rootCA.crt.pem -CAkey rootCA.key.pem -CAcreateserial -out intermediateCA.crt.pem -days 365 -sha256 -extfile <(echo -e \"basicConstraints=CA:true\\nkeyUsage=cRLSign, keyCertSign\\nextendedKeyUsage=critical,timeStamping\") -passin pass:\" CHANGE_ME \"", "openssl req -newkey rsa:2048 -sha256 -keyout leafCA.key.pem -out leafCA.csr.pem -passout pass:\" CHANGE_ME \" -subj \"/C=CC/ST=state/L=Locality/O=RH/OU=LeafCA/CN=LeafCA\"", "openssl x509 -req -in leafCA.csr.pem -CA intermediateCA.crt.pem -CAkey intermediateCA.key.pem -CAcreateserial -out leafCA.crt.pem -days 365 -sha256 -extfile <(echo -e \"basicConstraints=CA:false\\nkeyUsage=cRLSign, keyCertSign\\nextendedKeyUsage=critical,timeStamping\") -passin pass:\" CHANGE_ME \"", "cat leafCA.crt.pem intermediateCA.crt.pem rootCA.crt.pem > new-cert-chain.pem", "oc create secret generic rotated-signer-key 
--from-file=rotated-signer-key=certs/leafCA.key.pem", "oc create secret generic rotated-cert-chain --from-file=rotated-cert-chain=certs/new-cert-chain.pem", "oc create secret generic rotated-password --from-literal=rotated-password= CHANGE_ME", "export ACTIVE_CERT_CHAIN_NAME=tsa.certchain.pem export TSA_URL=USD(oc get timestampauthority securesign-sample -o jsonpath='{.status.url}')/api/v1/timestamp curl USDTSA_URL/certchain -o USDACTIVE_CERT_CHAIN_NAME", "read -r -d '' SECURESIGN_PATCH <<EOF [ { \"op\": \"replace\", \"path\": \"/spec/tsa/signer/certificateChain\", \"value\": { \"certificateChainRef\" : {\"name\": \"rotated-cert-chain\", \"key\": \"rotated-cert-chain\"} } }, { \"op\": \"replace\", \"path\": \"/spec/tsa/signer/file\", \"value\": { \"privateKeyRef\": {\"name\": \"rotated-signer-key\", \"key\": \"rotated-signer-key\"}, \"passwordRef\": {\"name\": \"rotated-password\", \"key\": \"rotated-password\"} } } ] EOF", "oc patch Securesign securesign-sample --type='json' -p=\"USDSECURESIGN_PATCH\"", "oc get pods -w -l app.kubernetes.io/name=tsa-server", "export NEW_CERT_CHAIN_NAME=new_tsa.certchain.pem curl USDTSA_URL/certchain -o USDNEW_CERT_CHAIN_NAME", "export WORK=\"USD{HOME}/trustroot-example\" export ROOT=\"USD{WORK}/root/root.json\" export KEYDIR=\"USD{WORK}/keys\" export INPUT=\"USD{WORK}/input\" export TUF_REPO=\"USD{WORK}/tuf-repo\" export TUF_SERVER_POD=\"USD(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=\":metadata.name\")\"", "mkdir -p \"USD{WORK}/root/\" \"USD{KEYDIR}\" \"USD{INPUT}\" \"USD{TUF_REPO}\"", "oc extract --to \"USD{KEYDIR}/\" secret/tuf-root-keys oc cp \"USD{TUF_SERVER_POD}:/var/www/html\" \"USD{TUF_REPO}\" cp \"USD{TUF_REPO}/root.json\" \"USD{ROOT}\"", "tuftool rhtas --root \"USD{ROOT}\" --key \"USD{KEYDIR}/snapshot.pem\" --key \"USD{KEYDIR}/targets.pem\" --key \"USD{KEYDIR}/timestamp.pem\" --set-tsa-target \"USDACTIVE_CERT_CHAIN_NAME\" --tsa-uri \"USDTSA_URL\" --tsa-status \"Expired\" --outdir \"USD{TUF_REPO}\" --metadata-url \"file://USD{TUF_REPO}\"", "tuftool rhtas --root \"USD{ROOT}\" --key \"USD{KEYDIR}/snapshot.pem\" --key \"USD{KEYDIR}/targets.pem\" --key \"USD{KEYDIR}/timestamp.pem\" --set-tsa-target \"USDNEW_CERT_CHAIN_NAME\" --tsa-uri \"USDTSA_URL\" --outdir \"USD{TUF_REPO}\" --metadata-url \"file://USD{TUF_REPO}\"", "oc rsync \"USD{TUF_REPO}/\" \"USD{TUF_SERVER_POD}:/var/www/html\"", "rm -r USDWORK", "cosign initialize --mirror=USDTUF_URL --root=USDTUF_URL/root.json" ]
https://docs.redhat.com/en/documentation/red_hat_trusted_artifact_signer/1/html/administration_guide/rotate-your-certificates-and-keys
11. Development and Tools
11. Development and Tools 11.1. Technology Previews libdfp An updated libdfp library is available in Red Hat Enterprise Linux 6. libdfp is a decimal floating point math library that serves as an alternative to the glibc math functions on the Power and s390x architectures, and is available in the supplementary channels. Eclipse Plugins The following plugins for the Eclipse software development environment are considered to be Technology Previews in this pre-release version of Red Hat Enterprise Linux 6: the Mylyn plugin for the Eclipse task management subsystem, and the eclipse-callgraph C/C++ Call Graph Visualization plugin.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/ar01s11
Validation and troubleshooting
Validation and troubleshooting OpenShift Container Platform 4.14 Validating and troubleshooting an OpenShift Container Platform installation Red Hat OpenShift Documentation Team
[ "cat <install_dir>/.openshift_install.log", "time=\"2020-12-03T09:50:47Z\" level=info msg=\"Install complete!\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Login to the console with user: \\\"kubeadmin\\\", and password: \\\"password\\\"\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Time elapsed per stage:\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Infrastructure: 6m45s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Bootstrap Complete: 11m30s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Bootstrap Destroy: 1m5s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Cluster Operators: 17m31s\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Time elapsed: 37m26s\"", "oc adm node-logs <node_name> -u crio", "Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1366]: time=\"2021-08-05 10:33:21.594930907Z\" level=info msg=\"Pulling image: quay.io/openshift-release-dev/ocp-release:4.10.0-ppc64le\" id=abcd713b-d0e1-4844-ac1c-474c5b60c07c name=/runtime.v1alpha2.ImageService/PullImage Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.194341109Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\" Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.226788351Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\"", "Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\" Trying to access \\\"li0317gcp2.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.4 True False 6m25s Cluster version is 4.6.4", "oc get clusteroperators.config.openshift.io", "oc describe clusterversion", "oc get clusterversion -o jsonpath='{.items[0].spec}{\"\\n\"}'", "{\"channel\":\"stable-4.6\",\"clusterID\":\"245539c1-72a3-41aa-9cec-72ed8cf25c5c\"}", "oc adm upgrade", "Cluster version is 4.6.4 Updates: VERSION IMAGE 4.6.6 quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39", "oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}", "Manual", "oc get secrets -n kube-system <secret_name>", "Error from server (NotFound): secrets \"aws-creds\" not found", "oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'", "oc get secrets -n openshift-image-registry installer-cloud-credentials -o jsonpath='{.data}'", "oc get pods -n openshift-cloud-credential-operator", "NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m", "oc get nodes", "NAME STATUS ROLES AGE VERSION compute-1.example.com Ready worker 33m v1.27.3 control-plane-1.example.com Ready master 41m v1.27.3 
control-plane-2.example.com Ready master 45m v1.27.3 compute-2.example.com Ready worker 38m v1.27.3 compute-3.example.com Ready worker 33m v1.27.3 control-plane-3.example.com Ready master 41m v1.27.3", "oc adm top nodes", "NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% compute-1.example.com 128m 8% 1132Mi 16% control-plane-1.example.com 801m 22% 3471Mi 23% control-plane-2.example.com 1718m 49% 6085Mi 40% compute-2.example.com 935m 62% 5178Mi 75% compute-3.example.com 111m 7% 1131Mi 16% control-plane-3.example.com 942m 26% 4100Mi 27%", "./openshift-install gather bootstrap --dir <installation_directory> 1", "./openshift-install gather bootstrap --dir <installation_directory> \\ 1 --bootstrap <bootstrap_address> \\ 2 --master <master_1_address> \\ 3 --master <master_2_address> \\ 4 --master <master_3_address> 5", "INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here \"<installation_directory>/log-bundle-<timestamp>.tar.gz\"", "journalctl -b -f -u bootkube.service", "for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done", "tail -f /var/lib/containers/storage/overlay-containers/*/userdata/ctr.log", "journalctl -b -f -u kubelet.service -u crio.service", "sudo tail -f /var/log/containers/*", "oc adm node-logs --role=master -u kubelet", "oc adm node-logs --role=master --path=openshift-apiserver", "cat ~/<installation_directory>/.openshift_install.log 1", "./openshift-install create cluster --dir <installation_directory> --log-level debug 1", "./openshift-install destroy cluster --dir <installation_directory> 1", "rm -rf <installation_directory>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/validation_and_troubleshooting/index
Installing on Nutanix
Installing on Nutanix OpenShift Container Platform 4.16 Installing OpenShift Container Platform on Nutanix Red Hat OpenShift Documentation Team
[ "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "oc edit infrastructures.config.openshift.io cluster", "spec: cloudConfig: key: config name: cloud-provider-config # platformSpec: nutanix: failureDomains: - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid>", "oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: cluster namespace: openshift-machine-api spec: template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: Nutanix nutanix: - name: <failure_domain_name_1> - name: <failure_domain_name_2> - name: <failure_domain_name_3>", "oc describe infrastructures.config.openshift.io cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <machine_set_name_1> 1 1 1 1 55m <machine_set_name_2> 1 1 1 1 55m", "oc edit machineset <machine_set_name_1> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: <machine_set_name_1> namespace: openshift-machine-api spec: replicas: 2 template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1 failureDomain: name: <failure_domain_name_1> cluster: type: uuid uuid: <prism_element_uuid_1> subnets: - type: uuid uuid: <prism_element_network_uuid_1>", "oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name_1>", "NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running AHV Unnamed Development-STS 4h <machine_name_original_2> Running AHV Unnamed Development-STS 4h", "oc annotate machine/<machine_name_original_1> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"", "oc scale --replicas=<twice_the_number_of_replicas> \\ 1 machineset <machine_set_name_1> -n openshift-machine-api", "oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name_1>", "oc scale --replicas=<original_number_of_replicas> \\ 1 machineset <machine_set_name_1> -n openshift-machine-api", "oc describe infrastructures.config.openshift.io cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <original_machine_set_name_1> 1 1 1 1 55m <original_machine_set_name_2> 1 1 1 1 55m", "oc get 
machineset <original_machine_set_name_1> -n openshift-machine-api -o yaml > <new_machine_set_name_1>.yaml", "oc get machineset <original_machine_set_name_1> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "apiVersion: machine.openshift.io/v1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: <new_machine_set_name_1> namespace: openshift-machine-api spec: replicas: 2 template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1 failureDomain: name: <failure_domain_name_1> cluster: type: uuid uuid: <prism_element_uuid_1> subnets: - type: uuid uuid: <prism_element_network_uuid_1>", "oc create -f <new_machine_set_name_1>.yaml", "oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<new_machine_set_name_1>", "NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Provisioned AHV Unnamed Development-STS 25s <machine_from_new_2> Provisioning AHV Unnamed Development-STS 25s", "oc delete machineset <original_machine_set_name_1> -n openshift-machine-api", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <new_machine_set_name_1> 1 1 1 1 4m12s <new_machine_set_name_2> 1 1 1 1 4m12s", "oc get -n openshift-machine-api machines", "NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Running AHV Unnamed Development-STS 5m41s <machine_from_new_2> Running AHV Unnamed Development-STS 5m41s <machine_from_original_1> Deleting AHV Unnamed Development-STS 4h <machine_from_original_2> Deleting AHV Unnamed Development-STS 4h", "NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Running AHV Unnamed Development-STS 6m30s <machine_from_new_2> Running AHV Unnamed Development-STS 6m30s", "oc describe machine <machine_from_new_1> -n openshift-machine-api", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: 
- cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIPs: - 10.40.142.7 12 defaultMachinePlatform: bootType: Legacy categories: 13 - key: <category_key_name> value: <category_value> project: 14 type: name name: <project_name> ingressVIPs: - 10.40.142.8 15 prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23", "apiVersion: v1 baseDomain: example.com compute: platform: nutanix: failureDomains: - name: <failure_domain_name> prismElement: name: <prism_element_name> uuid: <prism_element_uuid> subnetUUIDs: - <network_uuid>", "apiVersion: v1 baseDomain: example.com compute: platform: nutanix: defaultMachinePlatform: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3", "apiVersion: v1 baseDomain: example.com compute: controlPlane: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 compute: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" labels: controller-tools.k8s.io: \"1.0\" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api", "ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1", "openshift-install create manifests --dir <installation_directory> 1", "cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests", "ls ./<installation_directory>/manifests", "cluster-config.yaml cluster-dns-02-config.yml 
cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml", "cd <path_to_installation_directory>/manifests", "apiVersion: v1 kind: ConfigMap metadata: name: cloud-conf namespace: openshift-cloud-controller-manager data: cloud.conf: \"{ \\\"prismCentral\\\": { \\\"address\\\": \\\"<prism_central_FQDN/IP>\\\", 1 \\\"port\\\": 9440, \\\"credentialRef\\\": { \\\"kind\\\": \\\"Secret\\\", \\\"name\\\": \\\"nutanix-credentials\\\", \\\"namespace\\\": \\\"openshift-cloud-controller-manager\\\" } }, \\\"topologyDiscovery\\\": { \\\"type\\\": \\\"Prism\\\", \\\"topologyCategories\\\": null }, \\\"enableCustomLabeling\\\": true }\"", "spec: cloudConfig: key: config name: cloud-provider-config", "Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10", "listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2", "listen api-server-6443 bind *:6443 mode tcp server master-00 192.168.83.89:6443 check inter 1s server master-01 192.168.84.90:6443 check inter 1s server master-02 192.168.85.99:6443 check inter 1s server bootstrap 192.168.80.89:6443 check inter 1s listen machine-config-server-22623 bind *:22623 mode tcp server master-00 192.168.83.89:22623 check inter 1s server master-01 192.168.84.90:22623 check inter 1s server master-02 
192.168.85.99:22623 check inter 1s server bootstrap 192.168.80.89:22623 check inter 1s listen ingress-router-80 bind *:80 mode tcp balance source server worker-00 192.168.83.100:80 check inter 1s server worker-01 192.168.83.101:80 check inter 1s listen ingress-router-443 bind *:443 mode tcp balance source server worker-00 192.168.83.100:443 check inter 1s server worker-01 192.168.83.101:443 check inter 1s listen ironic-api-6385 bind *:6385 mode tcp balance source server master-00 192.168.83.89:6385 check inter 1s server master-01 192.168.84.90:6385 check inter 1s server master-02 192.168.85.99:6385 check inter 1s server bootstrap 192.168.80.89:6385 check inter 1s listen inspector-api-5050 bind *:5050 mode tcp balance source server master-00 192.168.83.89:5050 check inter 1s server master-01 192.168.84.90:5050 check inter 1s server master-02 192.168.85.99:5050 check inter 1s server bootstrap 192.168.80.89:5050 check inter 1s", "curl https://<loadbalancer_ip_address>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache", "curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "platform: nutanix: loadBalancer: type: UserManaged 1 apiVIPs: - <api_ip> 2 ingressVIPs: - <ingress_ip> 3", "curl https://api.<cluster_name>.<base_domain>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: 
csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install coreos print-stream-json", "\"nutanix\": { \"release\": \"411.86.202210041459-0\", \"formats\": { \"qcow2\": { \"disk\": { \"location\": \"https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.11/411.86.202210041459-0/x86_64/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2\", \"sha256\": \"42e227cac6f11ac37ee8a2f9528bb3665146566890577fd55f9b950949e5a54b\"", "platform: nutanix: clusterOSImage: http://example.com/images/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2", "./openshift-install create install-config --dir <installation_directory> 1", "platform: nutanix: clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: 
<category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIP: 10.40.142.7 12 ingressVIP: 10.40.142.8 13 defaultMachinePlatform: bootType: Legacy categories: 14 - key: <category_key_name> value: <category_value> project: 15 type: name name: <project_name> prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: example.com compute: platform: nutanix: failureDomains: - name: <failure_domain_name> prismElement: name: <prism_element_name> uuid: <prism_element_uuid> subnetUUIDs: - <network_uuid>", "apiVersion: v1 baseDomain: example.com compute: platform: nutanix: defaultMachinePlatform: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3", "apiVersion: v1 baseDomain: example.com compute: controlPlane: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 compute: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" labels: controller-tools.k8s.io: \"1.0\" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: 
cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api", "ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1", "openshift-install create manifests --dir <installation_directory> 1", "cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests", "ls ./<installation_directory>/manifests", "cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc apply -f ./oc-mirror-workspace/results-<id>/", "oc get imagecontentsourcepolicy", "oc get catalogsource --all-namespaces", "apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:", "compute: platform: nutanix: categories: key:", "compute: platform: nutanix: categories: value:", "compute: platform: nutanix: failureDomains:", "compute: platform: nutanix: project: type:", "compute: platform: nutanix: project: 
name: or uuid:", "compute: platform: nutanix: bootType:", "controlPlane: platform: nutanix: categories: key:", "controlPlane: platform: nutanix: categories: value:", "controlPlane: platform: nutanix: failureDomains:", "controlPlane: platform: nutanix: project: type:", "controlPlane: platform: nutanix: project: name: or uuid:", "platform: nutanix: defaultMachinePlatform: categories: key:", "platform: nutanix: defaultMachinePlatform: categories: value:", "platform: nutanix: defaultMachinePlatform: failureDomains:", "platform: nutanix: defaultMachinePlatform: project: type:", "platform: nutanix: defaultMachinePlatform: project: name: or uuid:", "platform: nutanix: defaultMachinePlatform: bootType:", "platform: nutanix: apiVIP:", "platform: nutanix: failureDomains: - name: prismElement: name: uuid: subnetUUIDs: -", "platform: nutanix: ingressVIP:", "platform: nutanix: prismCentral: endpoint: address:", "platform: nutanix: prismCentral: endpoint: port:", "platform: nutanix: prismCentral: password:", "platform: nutanix: prismCentral: username:", "platform: nutanix: prismElements: endpoint: address:", "platform: nutanix: prismElements: endpoint: port:", "platform: nutanix: prismElements: uuid:", "platform: nutanix: subnetUUIDs:", "platform: nutanix: clusterOSImage:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/installing_on_nutanix/index
Chapter 4. Determining hardware and OS configuration
Chapter 4. Determining hardware and OS configuration CPU The more physical cores that are available to Satellite, the higher the throughput that can be achieved for its tasks. Some of the Satellite components, such as Puppet and PostgreSQL, are CPU-intensive applications and benefit from a higher number of available CPU cores. Memory The more memory that is available in the system running Satellite, the better the response times for Satellite operations will be. Since Satellite uses PostgreSQL as its database solution, any additional memory, coupled with the tunings, boosts the response times of the applications because more data is retained in memory. Disk Because Satellite performs heavy IOPS for repository synchronization, package data retrieval, and high-frequency database updates to the subscription records of content hosts, install Satellite on a high-speed SSD to avoid performance bottlenecks caused by increased disk reads or writes. Satellite requires disk IO to be at or above 60 - 80 megabytes per second of average throughput for read operations. Anything below this value can have severe implications for the operation of Satellite. Satellite components such as PostgreSQL benefit from using SSDs due to their lower latency compared to HDDs. Network The communication between the Satellite Server and Capsules is impacted by the network performance. A reliable network with minimal jitter and low latency is required for hassle-free operations such as synchronization between the Satellite Server and Capsules (at a minimum, ensure that it is not causing connection resets). Server Power Management Your server is likely configured by default to conserve power. While this is a good approach to keep the maximum power consumption in check, it has the side effect of lowering the performance that Satellite may be able to achieve. For a server running Satellite, it is recommended to configure the BIOS to run the system in performance mode, to boost the maximum performance levels that Satellite can achieve. 4.1. Benchmarking disk performance We are working to update satellite-maintain to only warn users when its internal quick storage benchmark results in numbers below our recommended throughput. We are also working on an updated benchmark script that you can run (which will likely be integrated into satellite-maintain in the future) to get more accurate, real-world storage information. Note You may have to temporarily reduce the RAM in order to run the I/O benchmark. For example, if your Satellite Server has 256 GiB RAM, the tests would require 512 GiB of storage to run. As a workaround, you can add the mem=20G kernel option in GRUB during system boot to temporarily reduce the size of the RAM. The benchmark creates a file twice the size of the RAM in the specified directory and executes a series of storage I/O tests against it. The size of the file ensures that the test is not just testing the filesystem caching. If you benchmark other filesystems, for example smaller volumes such as PostgreSQL storage, you might have to reduce the RAM size as described above. If you are using different storage solutions such as SAN or iSCSI, you can expect different performance. Red Hat recommends that you stop all services before executing this script, and you will be prompted to do so. This test does not use direct I/O and will utilize file caching as normal operations would. You can find our first version of the script storage-benchmark . 
To execute it, download the script to your Satellite, make it executable, and run: As noted in the README block in the script, you generally want to see an average of 100 MB/sec or higher in the tests below: Local SSD-based storage should give values of 600 MB/sec or higher. Spinning disks should give values in the range of 100 - 200 MB/sec or higher. If you see values below this, please open a support ticket for assistance. For more information, see Impact of Disk Speed on Satellite Operations . 4.2. Enabling tuned profiles On bare-metal, Red Hat recommends running the throughput-performance tuned profile on Satellite Server and Capsules. On virtual machines, Red Hat recommends running the virtual-guest profile. Procedure Check if tuned is running: If tuned is not running, enable it: Optional: View a list of available tuned profiles: Enable a tuned profile depending on your scenario: 4.3. Disable Transparent Hugepage Transparent Hugepage is a memory management technique used by the Linux kernel to reduce the overhead of using the Translation Lookaside Buffer (TLB) by using larger memory pages. Because databases have sparse memory access patterns instead of contiguous memory access patterns, database workloads often perform poorly when Transparent Hugepage is enabled. To improve PostgreSQL and Redis performance, disable Transparent Hugepage. In deployments where the databases are running on separate servers, there may be a small benefit to using Transparent Hugepage on the Satellite Server only. For more information on how to disable Transparent Hugepage, see How to disable transparent hugepages (THP) on Red Hat Enterprise Linux .
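The knowledgebase article linked above is the authoritative procedure for disabling Transparent Hugepage; as an illustration only, on a typical Red Hat Enterprise Linux host the check and the change look roughly like the following sketch, which assumes grubby is available and that a reboot is acceptable:
# Show the current setting; the value in brackets is the active one
cat /sys/kernel/mm/transparent_hugepage/enabled
# Disable THP for all installed kernels, then reboot for the change to take effect
grubby --update-kernel=ALL --args="transparent_hugepage=never"
reboot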
[ "./storage-benchmark /var/lib/pulp", "systemctl status tuned", "systemctl enable --now tuned", "tuned-adm list", "tuned-adm profile \" My_Tuned_Profile \"" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/tuning_performance_of_red_hat_satellite/Determining_Hardware_and_OS_Configuration_performance-tuning
B.105. xorg-x11-drv-qxl
B.105. xorg-x11-drv-qxl B.105.1. RHBA-2010:0917 - xorg-x11-drv-qxl bug fix update An updated xorg-x11-drv-qxl package that fixes various bugs is now available. xorg-x11-drv-qxl is an X11 video driver for the QEMU QXL video accelerator. This driver makes it possible to use Red Hat Enterprise Linux 6 as a guest operating system under KVM and QEMU, using the SPICE protocol. This updated xorg-x11-drv-qxl package includes fixes for the following bugs: BZ# 648933 When using the qxl driver, only a limited number of resolution choices were available for use inside the guest, none of which exceeded 1024x768 in size unless the xorg.conf configuration file was (first created, and then) manually edited. This update ensures that larger resolutions are available for guests with appropriate hardware without needing to manually change xorg.conf. BZ# 648935 When using the qxl driver, after connecting to a virtual guest over the SPICE protocol and logging into a desktop session from the GDM display manager, attempting to switch to a virtual console using a key combination caused the X server to crash, and GDM to respawn. This update fixes this issue so that, in the aforementioned situation, switching to a virtual console and back to the graphical desktop works as expected. All users of KVM-based virtualization are advised to upgrade to this updated package, which fixes these issues.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/xorg-x11-drv-qxl
Chapter 39. JMS - IBM MQ Kamelet Sink
Chapter 39. JMS - IBM MQ Kamelet Sink A Kamelet that can produce events to an IBM MQ message queue using JMS. 39.1. Configuration Options The following table summarizes the configuration options available for the jms-ibm-mq-sink Kamelet: Property Name Description Type Default Example channel * IBM MQ Channel Name of the IBM MQ Channel string destinationName * Destination Name The destination name string password * Password Password to authenticate to IBM MQ server string queueManager * IBM MQ Queue Manager Name of the IBM MQ Queue Manager string serverName * IBM MQ Server name IBM MQ Server name or address string serverPort * IBM MQ Server Port IBM MQ Server port integer 1414 username * Username Username to authenticate to IBM MQ server string clientId IBM MQ Client ID Name of the IBM MQ Client ID string destinationType Destination Type The JMS destination type (queue or topic) string "queue" Note Fields marked with an asterisk (*) are mandatory. 39.2. Dependencies At runtime, the jms-ibm-mq-sink Kamelet relies upon the presence of the following dependencies: camel:jms camel:kamelet mvn:com.ibm.mq:com.ibm.mq.allclient:9.2.5.0 39.3. Usage This section describes how you can use the jms-ibm-mq-sink . 39.3.1. Knative Sink You can use the jms-ibm-mq-sink Kamelet as a Knative sink by binding it to a Knative object. jms-ibm-mq-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-ibm-mq-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel properties: serverName: "10.103.41.245" serverPort: "1414" destinationType: "queue" destinationName: "DEV.QUEUE.1" queueManager: QM1 channel: DEV.APP.SVRCONN username: app password: passw0rd 39.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 39.3.1.2. Procedure for using the cluster CLI Save the jms-ibm-mq-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jms-ibm-mq-sink-binding.yaml 39.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind --name jms-ibm-mq-sink-binding timer-source?message="Hello IBM MQ!" 'jms-ibm-mq-sink?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd' This command creates the KameletBinding in the current namespace on the cluster. 39.3.2. Kafka Sink You can use the jms-ibm-mq-sink Kamelet as a Kafka sink by binding it to a Kafka topic. jms-ibm-mq-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-ibm-mq-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-ibm-mq-sink properties: serverName: "10.103.41.245" serverPort: "1414" destinationType: "queue" destinationName: "DEV.QUEUE.1" queueManager: QM1 channel: DEV.APP.SVRCONN username: app password: passw0rd 39.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 39.3.2.2. 
Procedure for using the cluster CLI Save the jms-ibm-mq-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jms-ibm-mq-sink-binding.yaml 39.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind --name jms-ibm-mq-sink-binding timer-source?message="Hello IBM MQ!" 'jms-ibm-mq-sink?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd' This command creates the KameletBinding in the current namespace on the cluster. 39.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/jms-ibm-mq-sink.kamelet.yaml
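For reference, the same Kamelet can publish to a JMS topic instead of a queue by setting destinationType, as described in the configuration options above; the topic name in this sketch (DEV.BASE.TOPIC) is only an assumed example and must match a topic that exists on your queue manager:
kamel bind --name jms-ibm-mq-sink-binding timer-source?message="Hello IBM MQ!" 'jms-ibm-mq-sink?serverName=10.103.41.245&serverPort=1414&destinationType=topic&destinationName=DEV.BASE.TOPIC&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd'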
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-ibm-mq-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel properties: serverName: \"10.103.41.245\" serverPort: \"1414\" destinationType: \"queue\" destinationName: \"DEV.QUEUE.1\" queueManager: QM1 channel: DEV.APP.SVRCONN username: app password: passw0rd", "apply -f jms-ibm-mq-sink-binding.yaml", "kamel bind --name jms-ibm-mq-sink-binding timer-source?message=\"Hello IBM MQ!\" 'jms-ibm-mq-sink?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd'", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-ibm-mq-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-ibm-mq-sink properties: serverName: \"10.103.41.245\" serverPort: \"1414\" destinationType: \"queue\" destinationName: \"DEV.QUEUE.1\" queueManager: QM1 channel: DEV.APP.SVRCONN username: app password: passw0rd", "apply -f jms-ibm-mq-sink-binding.yaml", "kamel bind --name jms-ibm-mq-sink-binding timer-source?message=\"Hello IBM MQ!\" 'jms-ibm-mq-sink?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd'" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/jms-ibm-mq-sink
Using AMQ Streams on RHEL
Using AMQ Streams on RHEL Red Hat Streams for Apache Kafka 2.5 Configure and manage a deployment of AMQ Streams 2.5 on Red Hat Enterprise Linux
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/using_amq_streams_on_rhel/index
Installing on OpenShift Container Platform
Installing on OpenShift Container Platform Red Hat Ansible Automation Platform 2.5 Install and configure Ansible Automation Platform operator on OpenShift Container Platform Red Hat Customer Content Services
[ "new-project ansible-automation-platform", "--- apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" name: ansible-automation-platform --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ansible-automation-platform-operator namespace: ansible-automation-platform spec: targetNamespaces: - ansible-automation-platform --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ansible-automation-platform namespace: ansible-automation-platform spec: channel: 'stable-2.5' installPlanApproval: Automatic name: ansible-automation-platform-operator source: redhat-operators sourceNamespace: openshift-marketplace ---", "apply -f sub.yaml", "get csv -n ansible-automation-platform NAME DISPLAY VERSION REPLACES PHASE aap-operator.v2.5.0-0.1728520175 Ansible Automation Platform 2.5.0+0.1728520175 aap-operator.v2.5.0-0.1727875185 Succeeded", "apply -f - <<EOF apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: example namespace: ansible-automation-platform spec: # Platform image_pull_policy: IfNotPresent # Components controller: disabled: false eda: disabled: false hub: disabled: false ## Modify to contain your RWM storage class name storage_type: file file_storage_storage_class: <your-read-write-many-storage-class> file_storage_size: 10Gi ## uncomment if using S3 storage for Content pod # storage_type: S3 # object_storage_s3_secret: example-galaxy-object-storage ## uncomment if using Azure storage for Content pod # storage_type: azure # object_storage_azure_secret: azure-secret-name lightspeed: disabled: true EOF", "get routes -n <platform_namespace>", "oc get routes -n ansible-automation-platform NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD example example-ansible-automation-platform.apps-crc.testing example-service http edge/Redirect None", "get secret/<your instance name>-<admin_user>-password -o yaml", "get secret/example-admin-password -o yaml", "oc get secret/example-admin-password -o yaml apiVersion: v1 data: password: ODzLODzLODzLODzLODzLODzLODzLODzLODzLODzLODzL kind: Secret metadata: labels: app.kubernetes.io/component: aap app.kubernetes.io/name: example app.kubernetes.io/operator-version: \"\" app.kubernetes.io/part-of: example name: example-admin-password namespace: ansible-automation-platform", "get secret/example-admin-password -o jsonpath={.data.password} | base64 --decode", "spec: database: resource_requirements: requests: cpu: 200m memory: 512Mi storage_requirements: requests: storage: 100Gi controller: disabled: false eda: disabled: false hub: disabled: false storage_type: file file_storage_storage_class: <read-write-many-storage-class> file_storage_size: 10Gi", "apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: example-aap namespace: aap spec: database: resource_requirements: requests: cpu: 200m memory: 512Mi storage_requirements: requests: storage: 100Gi # Platform image_pull_policy: IfNotPresent # Components controller: disabled: false name: existing-controller-name eda: disabled: false hub: disabled: false ## uncomment if using file storage for Content pod storage_type: file file_storage_storage_class: <your-read-write-many-storage-class> file_storage_size: 10Gi ## uncomment if using S3 storage for Content pod # storage_type: S3 # object_storage_s3_secret: example-galaxy-object-storage ## uncomment if using Azure storage", "apiVersion: v1 kind: Secret metadata: name: external-postgres-configuration namespace: 
<target_namespace> 1 stringData: host: \"<external_ip_or_url_resolvable_by_the_cluster>\" 2 port: \"<external_port>\" 3 database: \"<desired_database_name>\" username: \"<username_to_connect_as>\" password: \"<password_to_connect_with>\" 4 type: \"unmanaged\" type: Opaque", "oc create -f external-postgres-configuration-secret.yml", "apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: example-aap Namespace: aap spec: database: database_secret: automation-platform-postgres-configuration", "spec: extra_settings: - setting: REDIRECT_IS_HTTPS value: '\"True\"'", "exec -it <gateway-pod-name> -- grep REDIRECT /etc/ansible-automation-platform/gateway/settings.py", "oc create secret generic <resourcename>-custom-certs --from-file=ldap-ca.crt=<PATH/TO/YOUR/CA/PEM/FILE> \\ 1", "oc get secret/mycerts -o yaml apiVersion: v1 data: ldap-ca.crt: <mysecret> 1 kind: Secret metadata: name: mycerts namespace: AutomationController type: Opaque", "apiVersion: v1 kind: Secret metadata: name: external-postgres-configuration namespace: <target_namespace> 1 stringData: host: \"<external_ip_or_url_resolvable_by_the_cluster>\" 2 port: \"<external_port>\" 3 database: \"<desired_database_name>\" username: \"<username_to_connect_as>\" password: \"<password_to_connect_with>\" 4 sslmode: \"prefer\" 5 type: \"unmanaged\" type: Opaque", "oc create -f external-postgres-configuration-secret.yml", "apiVersion: automationcontroller.ansible.com/v1beta1 kind: AutomationController metadata: name: controller-dev spec: postgres_configuration_secret: external-postgres-configuration", "get pvc -n <namespace>", "delete pvc -n <namespace> <pvc-name>", "oc -n USDHUB_NAMESPACE apply -f- <<EOF apiVersion: v1 kind: Secret metadata: name: 'test-s3' stringData: s3-access-key-id: USDS3_ACCESS_KEY_ID s3-secret-access-key: USDS3_SECRET_ACCESS_KEY s3-bucket-name: USDS3_BUCKET_NAME s3-region: USDS3_REGION EOF", "spec: object_storage_s3_secret: test-s3", "oc -n USDHUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-api", "oc -n USDHUB_NAMESPACE apply -f- <<EOF apiVersion: v1 kind: Secret metadata: name: 'test-azure' stringData: azure-account-name: USDAZURE_ACCOUNT_NAME azure-account-key: USDAZURE_ACCOUNT_KEY azure-container: USDAZURE_CONTAINER azure-container-path: USDAZURE_CONTAINER_PATH azure-connection-string: USDAZURE_CONNECTION_STRING EOF", "spec: object_storage_azure_secret: test-azure", "oc -n USDHUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-api", "apiVersion: v1 kind: Secret metadata: name: external-postgres-configuration namespace: <target_namespace> 1 stringData: host: \"<external_ip_or_url_resolvable_by_the_cluster>\" 2 port: \"<external_port>\" 3 database: \"<desired_database_name>\" username: \"<username_to_connect_as>\" password: \"<password_to_connect_with>\" 4 sslmode: \"prefer\" 5 type: \"unmanaged\" type: Opaque", "oc create -f external-postgres-configuration-secret.yml", "apiVersion: automationhub.ansible.com/v1beta1 kind: AutomationHub metadata: name: hub-dev spec: postgres_configuration_secret: external-postgres-configuration", "psql -d <automation hub database> -c \"SELECT * FROM pg_available_extensions WHERE name='hstore'\"", "name | default_version | installed_version |comment ------+-----------------+-------------------+--------------------------------------------------- hstore | 1.7 | | data type for storing sets of (key, value) pairs (1 row)", "name | default_version | installed_version | comment ------+-----------------+-------------------+--------- (0 rows)", 
"dnf install postgresql-contrib", "psql -d <automation hub database> -c \"CREATE EXTENSION hstore;\"", "name | default_version | installed_version | comment -----+-----------------+-------------------+------------------------------------------------------ hstore | 1.7 | 1.7 | data type for storing sets of (key, value) pairs (1 row)", "get pvc -n <namespace>", "delete pvc -n <namespace> <pvc-name>", "spec: pulp_settings: ansible_collect_download_count: true", "spec: registrySources: allowedRegistries: - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - <OCP route for your automation hub>", "--- apiVersion: v1 kind: Secret metadata: name: <controller-resourcename>-secret-key namespace: <target-namespace> stringData: secret_key: <content of /etc/tower/SECRET_KEY from old controller> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: <eda-resourcename>-secret-key namespace: <target-namespace> stringData: secret_key: </etc/ansible-automation-platform/eda/SECRET_KEY> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: <hub-resourcename>-secret-key namespace: <target-namespace> stringData: database_fields.symmetric.key: </etc/pulp/certs/database_fields.symmetric.key> type: Opaque", "apply -f <yaml-file>", "apiVersion: v1 kind: Secret metadata: name: <resourcename>-old-postgres-configuration namespace: <target namespace> stringData: host: \"<external ip or url resolvable by the cluster>\" port: \"<external port, this usually defaults to 5432>\" database: \"<desired database name>\" username: \"<username to connect as>\" password: \"<password to connect with>\" type: Opaque", "apply -f <old-postgres-configuration.yml>", "apiVersion: v1 kind: Pod metadata: name: dbchecker spec: containers: - name: dbchecker image: registry.redhat.io/rhel8/postgresql-13:latest command: [\"sleep\"] args: [\"600\"]", "project ansible-automation-platform apply -f connection_checker.yaml", "get pods", "rsh dbchecker", "pg_isready -h <old-host-address> -p <old-port-number> -U AutomationContoller", "<old-host-address>:<old-port-number> - accepting connections", "--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: postgres_configuration_secret: external-postgres-configuration controller: disabled: false postgres_configuration_secret: external-controller-postgres-configuration secret_key_secret: controller-secret-key hub: disabled: false postgres_configuration_secret: external-hub-postgres-configuration db_fields_encryption_secret: hub-db-fields-encryption-secret", "--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: # Development purposes only no_log: false controller: name: existing-controller #obtain name from controller CR disabled: false eda: disabled: false hub: name: existing-hub disabled: false", "all: hosts: remote-execution: ansible_host: example_host_name # Same with configured in AAP WebUI ansible_user: <username> #user provided Ansible_ssh_private_key_file: ~/.ssh/id_example", "ansible-playbook install_receptor.yml -i inventory.yml", "sudo systemctl status receptor.service", "watch podman ps", "apiVersion: v1 kind: Secret metadata: name: controller-access type: Opaque stringData: token: <generated-token> host: https://my-controller-host.example.com/", "create -f controller-connection-secret.yml", "apiVersion: tower.ansible.com/v1alpha1 kind: AnsibleJob metadata: generateName: demo-job-1 # generate a unique suffix per 'kubectl create' spec: connection_secret: 
controller-access job_template_name: Demo Job Template", "spec: connection_secret: controller-access job_template_name: Demo Job Template inventory: Demo Inventory # Inventory prompt on launch needs to be enabled runner_image: quay.io/ansible/controller-resource-runner runner_version: latest job_ttl: 100 extra_vars: # Extra variables prompt on launch needs to be enabled test_var: test job_tags: \"provision,install,configuration\" # Specify tags to run skip_tags: \"configuration,restart\" # Skip tasks with a given tag", "apiVersion: tower.ansible.com/v1alpha1 kind: AnsibleJob metadata: generateName: demo-job-1 # generate a unique suffix per 'kubectl create' spec: connection_secret: controller-access workflow_template_name: Demo Workflow Template", "apiVersion: tower.ansible.com/v1alpha1 kind: JobTemplate metadata: name: jobtemplate-4 spec: connection_secret: controller-access job_template_name: ExampleJobTemplate4 job_template_project: Demo Project job_template_playbook: hello_world.yml job_template_inventory: Demo Inventory", "--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: # Development purposes only no_log: false controller: name: existing-controller disabled: false eda: disabled: false hub: name: existing-hub disabled: false", "apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: # Development purposes only no_log: false # Platform ## uncomment to test bundle certs # bundle_cacert_secret: gateway-custom-certs # Components hub: disabled: false ## uncomment if using file storage for Content pod storage_type: file file_storage_storage_class: nfs-local-rwx file_storage_size: 10Gi ## uncomment if using S3 storage for Content pod # storage_type: S3 # object_storage_s3_secret: example-galaxy-object-storage ## uncomment if using Azure storage for Content pod # storage_type: azure # object_storage_azure_secret: azure-secret-name # lightspeed: # disabled: true End state: * Automation controller deployed and named: myaap-controller * * Event-Driven Ansible deployed and named: myaap-eda * * Automation hub deployed and named: myaap-hub", "--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: # Development purposes only no_log: false controller: name: existing-controller eda: disabled: true hub: disabled: true ## uncomment if using file storage for Content pod # storage_type: file # file_storage_storage_class: nfs-local-rwx # file_storage_size: 10Gi ## uncomment if using S3 storage for Content pod # storage_type: S3 # object_storage_s3_secret: example-galaxy-object-storage ## uncomment if using Azure storage for Content pod # storage_type: azure # object_storage_azure_secret: azure-secret-name End state: * Automation controller: existing-controller registered with Ansible Automation Platform UI * * Event-Driven Ansible deployed and named: myaap-eda * * Automation hub deployed and named: myaap-hub", "--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: # Development purposes only no_log: false controller: name: existing-controller disabled: false eda: disabled: true hub: name: existing-hub disabled: false End state: * Automation controller: existing-controller registered with Ansible Automation Platform UI * * Event-Driven Ansible deployed and named: myaap-eda * * Automation hub: existing-hub registered with Ansible Automation Platform UI", "--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform 
metadata: name: myaap spec: # Development purposes only no_log: false controller: name: existing-controller # <-- this is the name of the existing AutomationController CR disabled: false eda: name: existing-eda disabled: false hub: name: existing-hub disabled: false End state: * Controller: existing-controller registered with Ansible Automation Platform UI * * Event-Driven Ansible: existing-eda registered with Ansible Automation Platform UI * * Automation hub: existing-hub registered with Ansible Automation Platform UI # Note: The automation controller, Event-Driven Ansible, and automation hub names must match the names of the existing. Automation controller, Event-Driven Ansible, and automation hub CRs in the same namespace as the Ansible Automation Platform CR. If the names do not match, the Ansible Automation Platform CR will not be able to register the existing automation controller, Event-Driven Ansible, and automation hub with the Ansible Automation Platform UI,and will instead deploy new automation controller, Event-Driven Ansible, and automation hub instances.", "--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: # Development purposes only no_log: false controller: name: existing-controller # <-- this is the name of the existing AutomationController CR disabled: false eda: name: existing-eda disabled: false hub: name: existing-hub disabled: false End state: * Automation controller: existing-controller registered with Ansible Automation Platform UI * * Event-Driven Ansible: existing-eda registered with Ansible Automation Platform UI * * Automation hub: existing-hub registered with Ansible Automation Platform UI # Note: The automation controller, Event-Driven Ansible, and automation hub names must match the names of the existing. Automation controller, Event-Driven Ansible, and automation hub CRs in the same namespace as the Ansible Automation Platform CR. 
If the names do not match, the Ansible Automation Platform CR will not be able to register the existing automation controller, Event-Driven Ansible, and automation hub with the Ansible Automation Platform UI,and will instead deploy new automation controller, Event-Driven Ansible, and automation hub instances.", "--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: # Development purposes only no_log: false controller: disabled: false eda: disabled: false hub: disabled: true ## uncomment if using file storage for Content pod storage_type: file file_storage_storage_class: nfs-local-rwx file_storage_size: 10Gi ## uncomment if using S3 storage for Content pod # storage_type: S3 # object_storage_s3_secret: example-galaxy-object-storage ## uncomment if using Azure storage for Content pod # storage_type: azure # object_storage_azure_secret: azure-secret-name End state: * Automation controller deployed and named: myaap-controller * * Event-Driven Ansible deployed and named: myaap-eda * * Automation hub disabled * Red Hat Ansible Lightspeed disabled", "--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: # Development purposes only no_log: false controller: disabled: false eda: disabled: false hub: disabled: false ## uncomment if using file storage for Content pod storage_type: file file_storage_storage_class: nfs-local-rwx file_storage_size: 10Gi ## uncomment if using S3 storage for Content pod # storage_type: S3 # object_storage_s3_secret: example-galaxy-object-storage ## uncomment if using Azure storage for Content pod # storage_type: azure # object_storage_azure_secret: azure-secret-name End state: * Automation controller deployed and named: myaap-controller * * Event-Driven Ansible deployed and named: myaap-eda * * Automation hub deployed and named: myaap-hub", "--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: database: database_secret: external-postgres-configuration-gateway controller: postgres_configuration_secret: external-postgres-configuration-controller hub: postgres_configuration_secret: external-postgres-configuration-hub eda: database: database_secret: external-postgres-configuration-eda", "--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: database: database_secret: external-postgres-configuration-gateway", "--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: database: database_secret: external-postgres-configuration-gateway controller: postgres_configuration_secret: external-postgres-configuration-controller hub: postgres_configuration_secret: external-postgres-configuration-hub eda: database: database_secret: external-postgres-configuration-eda lightspeed: disabled: false database: database_secret: <secret-name>-postgres-configuration auth_config_secret_name: 'auth-configuration-secret' model_config_secret_name: 'model-configuration-secret'", "--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: # Development purposes only no_log: false # Platform ## uncomment to test bundle certs # bundle_cacert_secret: gateway-custom-certs # Components controller: disabled: false extra_settings: - setting: ALLOW_LOCAL_RESOURCE_MANAGEMENT value: 'True' eda: disabled: false extra_settings: - setting: EDA_ALLOW_LOCAL_RESOURCE_MANAGEMENT value: '@bool True' hub: disabled: false ## uncomment if using file 
storage for Content pod storage_type: file file_storage_storage_class: nfs-local-rwx file_storage_size: 10Gi pulp_settings: ALLOW_LOCAL_RESOURCE_MANAGEMENT: True # cache_enabled: false # redirect_to_object_storage: \"False\" # analytics: false # galaxy_collection_signing_service: \"\" # galaxy_container_signing_service: \"\" # token_auth_disabled: 'False' # token_signature_algorithm: 'ES256' ## uncomment if using S3 storage for Content pod # storage_type: S3 # object_storage_s3_secret: example-galaxy-object-storage ## uncomment if using Azure storage for Content pod # storage_type: azure # object_storage_azure_secret: azure-secret-name # Development purposes only no_log: false # lightspeed: # disabled: true End state: * Automation controller deployed and named: myaap-controller * * Event-Driven Ansible deployed and named: myaap-eda * * Automation hub deployed and named: myaap-hub", "--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: # Development purposes only no_log: false image_pull_policy: Always # Platform ## uncomment to test bundle certs # bundle_cacert_secret: gateway-custom-certs # Components controller: disabled: false image_pull_policy: Always extra_settings: - setting: MAX_PAGE_SIZE value: '501' eda: disabled: false image_pull_policy: Always extra_settings: - setting: EDA_MAX_PAGE_SIZE value: '501' hub: disabled: false image_pull_policy: Always ## uncomment if using file storage for Content pod storage_type: file file_storage_storage_class: rook-cephfs file_storage_size: 10Gi ## uncomment if using S3 storage for Content pod # storage_type: S3 # object_storage_s3_secret: example-galaxy-object-storage ## uncomment if using Azure storage for Content pod # storage_type: azure # object_storage_azure_secret: azure-secret-name pulp_settings: MAX_PAGE_SIZE: 501 cache_enabled: false # lightspeed: # disabled: true End state: * Automation controller deployed and named: myaap-controller * * Event-Driven Ansible deployed and named: myaap-eda * * Automation hub deployed and named: myaap-hub", "--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: # Development purposes only no_log: false # Redis Mode # redis_mode: cluster # Platform ## uncomment to test bundle certs # bundle_cacert_secret: gateway-custom-certs # extra_settings: # - setting: MAX_PAGE_SIZE # value: '501' # Components controller: disabled: false eda: disabled: false hub: disabled: false ## uncomment if using file storage for Content pod storage_type: file file_storage_storage_class: nfs-local-rwx file_storage_size: 10Gi ## uncomment if using S3 storage for Content pod # storage_type: S3 # object_storage_s3_secret: example-galaxy-object-storage ## uncomment if using Azure storage for Content pod # storage_type: azure # object_storage_azure_secret: azure-secret-name # lightspeed: # disabled: true End state: * Automation controller deployed and named: myaap-controller * * Event-Driven Ansible deployed and named: myaap-eda * * Automation hub deployed and named: myaap-hub", "--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: # Development purposes only no_log: false controller: disabled: false eda: disabled: true hub: disabled: true ## uncomment if using file storage for Content pod # storage_type: file # file_storage_storage_class: nfs-local-rwx # file_storage_size: 10Gi ## uncomment if using S3 storage for Content pod # storage_type: S3 # object_storage_s3_secret: 
example-galaxy-object-storage ## uncomment if using Azure storage for Content pod # storage_type: azure # object_storage_azure_secret: azure-secret-name End state: * Automation controller: existing-controller registered with Ansible Automation Platform UI * * Event-Driven Ansible deployed and named: myaap-eda * * Automation hub deployed and named: myaap-hub", "--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: # Development purposes only no_log: false controller: disabled: true eda: disabled: true hub: disabled: false ## uncomment if using file storage for Content pod storage_type: file file_storage_storage_class: nfs-local-rwx file_storage_size: 10Gi # # AaaS Hub Settings # pulp_settings: # cache_enabled: false ## uncomment if using S3 storage for Content pod # storage_type: S3 # object_storage_s3_secret: example-galaxy-object-storage ## uncomment if using Azure storage for Content pod # storage_type: azure # object_storage_azure_secret: azure-secret-name lightspeed: disabled: false End state: * Automation controller disabled * * Event-Driven Ansible disabled * * Automation hub deployed and named: myaap-hub * Red Hat Ansible Lightspeed disabled", "--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: # Development purposes only no_log: false controller: disabled: false eda: disabled: false hub: disabled: false ## uncomment if using file storage for Content pod storage_type: file file_storage_storage_class: nfs-local-rwx file_storage_size: 10Gi ## uncomment if using S3 storage for Content pod # storage_type: S3 # object_storage_s3_secret: example-galaxy-object-storage ## uncomment if using Azure storage for Content pod # storage_type: azure # object_storage_azure_secret: azure-secret-name lightspeed: disabled: false End state: * Automation controller deployed and named: myaap-controller * * Event-Driven Ansible deployed and named: myaap-eda * * Automation hub deployed and named: myaap-hub * Red Hat Ansible Lightspeed deployed and named: myaap-lightspeed", "--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: # Development purposes only no_log: false controller: disabled: true eda: disabled: true hub: disabled: true lightspeed: disabled: true End state: * Platform gateway deployed and named: myaap-gateway * UI is reachable at: https://myaap-gateway-gateway.apps.ocp4.example.com * Automation controller is not deployed * * Event-Driven Ansible is not deployed * * Automation hub is not deployed * Red Hat Ansible Lightspeed is not deployed", "--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: eda: extra_settings: - setting: EDA_MAX_RUNNING_ACTIVATIONS value: \"15\" # Setting this value to \"-1\" means there will be no limit" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/installing_on_openshift_container_platform/index
33.5. Network Configuration
33.5. Network Configuration Figure 33.8. Network Configuration If the system to be installed via kickstart does not have an Ethernet card, do not configure one on the Network Configuration page. Networking is only required if you choose a networking-based installation method (NFS, FTP, or HTTP). Networking can always be configured after installation with the Network Administration Tool ( system-config-network ). Refer to the Red Hat Enterprise Linux Deployment Guide for details. For each Ethernet card on the system, click Add Network Device and select the network device and network type for the device. Select eth0 to configure the first Ethernet card, eth1 for the second Ethernet card, and so on.
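For context, the settings selected on this page are written to the kickstart file as network directives. The following is a minimal sketch of what the generated lines might look like; the device names and addresses are illustrative assumptions:
network --device=eth0 --bootproto=dhcp
network --device=eth1 --bootproto=static --ip=192.0.2.10 --netmask=255.255.255.0 --gateway=192.0.2.1 --nameserver=192.0.2.2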
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-redhat-config-kickstart-network
Chapter 23. Next steps
Chapter 23. Next steps Testing a decision service using test scenarios Packaging and deploying a Red Hat Process Automation Manager project
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/next_steps
Red Hat Data Grid
Red Hat Data Grid Data Grid is a high-performance, distributed in-memory data store. Schemaless data structure Flexibility to store different objects as key-value pairs. Grid-based data storage Designed to distribute and replicate data across clusters. Elastic scaling Dynamically adjust the number of nodes to meet demand without service disruption. Data interoperability Store, retrieve, and query data in the grid from different endpoints.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/querying_data_grid_caches/red-hat-data-grid
3.2. Is Your Hardware Compatible?
3.2. Is Your Hardware Compatible? Hardware compatibility is particularly important if you have an older system or a system that you built yourself. Red Hat Enterprise Linux 6.9 should be compatible with most hardware in systems that were factory built within the last two years. However, hardware specifications change almost daily, so it is difficult to guarantee that your hardware is 100% compatible. One consistent requirement is your processor. Red Hat Enterprise Linux 6.9 supports, at minimum, all 32-bit and 64-bit implementations of Intel microarchitectures from P6 onwards and AMD microarchitectures from Athlon onwards. The most recent list of supported hardware can be found at:
[ "https://hardware.redhat.com/" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-Is_Your_Hardware_Compatible-x86
19.2. Brick Configuration
19.2. Brick Configuration Format bricks using the following configurations to enhance performance: Procedure 19.1. Brick Configuration LVM layer The steps for creating a brick from a physical device are listed below. An outline of steps for creating multiple bricks on a physical device is listed as Example - Creating multiple bricks on a physical device below. Creating the Physical Volume The pvcreate command is used to create the physical volume. The Logical Volume Manager can use a portion of the physical volume for storing its metadata while the rest is used as the data portion. Align the I/O at the Logical Volume Manager (LVM) layer using the --dataalignment option while creating the physical volume. The command is used in the following format: For JBOD, use an alignment value of 256K . In the case of hardware RAID, the alignment_value should be obtained by multiplying the RAID stripe unit size with the number of data disks. If 12 disks are used in a RAID 6 configuration, the number of data disks is 10; on the other hand, if 12 disks are used in a RAID 10 configuration, the number of data disks is 6. For example, the following command is appropriate for 12 disks in a RAID 6 configuration with a stripe unit size of 128 KiB: The following command is appropriate for 12 disks in a RAID 10 configuration with a stripe unit size of 256 KiB: To view the previously configured physical volume settings for --dataalignment , run the following command: Creating the Volume Group The volume group is created using the vgcreate command. For hardware RAID, in order to ensure that logical volumes created in the volume group are aligned with the underlying RAID geometry, it is important to use the --physicalextentsize option. Execute the vgcreate command in the following format: The extent_size should be obtained by multiplying the RAID stripe unit size with the number of data disks. If 12 disks are used in a RAID 6 configuration, the number of data disks is 10; on the other hand, if 12 disks are used in a RAID 10 configuration, the number of data disks is 6. For example, run the following command for RAID-6 storage with a stripe unit size of 128 KB, and 12 disks (10 data disks): In the case of JBOD, use the vgcreate command in the following format: Creating the Thin Pool A thin pool provides a common pool of storage for thin logical volumes (LVs) and their snapshot volumes, if any. Execute the following commands to create a thin pool of a specific size: You can also create a thin pool of the maximum possible size for your device by executing the following command: Recommended parameter values for thin pool creation poolmetadatasize Internally, a thin pool contains a separate metadata device that is used to track the (dynamically) allocated regions of the thin LVs and snapshots. The poolmetadatasize option in the above command refers to the size of the pool metadata device. The maximum possible size for a metadata LV is 16 GiB. Red Hat Gluster Storage recommends creating the metadata device of the maximum supported size. You can allocate less than the maximum if space is a concern, but in this case you should allocate a minimum of 0.5% of the pool size. Warning If your metadata pool runs out of space, you cannot create data. This includes the data required to increase the size of the metadata pool or to migrate data away from a volume that has run out of metadata space. Monitor your metadata pool using the lvs -o+metadata_percent command and ensure that it does not run out of space.
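As a minimal sketch of that monitoring step, using the VOLGROUP and thin_pool names from the examples in this section, you can check metadata usage and, if it approaches capacity and free space remains in the volume group, grow the metadata LV with lvextend:
lvs -o +metadata_percent VOLGROUP/thin_pool
lvextend --poolmetadatasize +1G VOLGROUP/thin_pool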
chunksize An important parameter to be specified while creating a thin pool is the chunk size, which is the unit of allocation. For good performance, the chunk size for the thin pool and the parameters of the underlying hardware RAID storage should be chosen so that they work well together. For JBOD, use a thin pool chunk size of 256 KiB. For RAID 6 storage, the striping parameters should be chosen so that the full stripe size (stripe_unit size * number of data disks) is between 1 MiB and 2 MiB, preferably in the low end of the range. The thin pool chunk size should be chosen to match the RAID 6 full stripe size. Matching the chunk size to the full stripe size aligns thin pool allocations with RAID 6 stripes, which can lead to better performance. Limiting the chunk size to below 2 MiB helps reduce performance problems due to excessive copy-on-write when snapshots are used. For example, for RAID 6 with 12 disks (10 data disks), stripe unit size should be chosen as 128 KiB. This leads to a full stripe size of 1280 KiB (1.25 MiB). The thin pool should then be created with the chunk size of 1280 KiB. For RAID 10 storage, the preferred stripe unit size is 256 KiB. This can also serve as the thin pool chunk size. Note that RAID 10 is recommended when the workload has a large proportion of small file writes or random writes. In this case, a small thin pool chunk size is more appropriate, as it reduces copy-on-write overhead with snapshots. If the addressable storage on the device is smaller than the device itself, you need to adjust the recommended chunk size. Calculate the adjustment factor using the following formula: Round the adjustment factor up. Then calculate the new chunk size using the following: block zeroing By default, the newly provisioned chunks in a thin pool are zeroed to prevent data leaking between different block devices. In the case of Red Hat Gluster Storage, where data is accessed via a file system, this option can be turned off for better performance with the --zero n option. Note that n does not need to be replaced. The following example shows how to create the thin pool: You can also use --extents 100%FREE to ensure the thin pool takes up all available space once the metadata pool is created. The following example shows how to create a 2 TB thin pool: The following example creates a thin pool that takes up all remaining space once the metadata pool has been created. Creating a Thin Logical Volume After the thin pool has been created as mentioned above, a thinly provisioned logical volume can be created in the thin pool to serve as storage for a brick of a Red Hat Gluster Storage volume. Example - Creating multiple bricks on a physical device The steps above (LVM Layer) cover the case where a single brick is being created on a physical device. This example shows how to adapt these steps when multiple bricks need to be created on a physical device.
Note In the following steps, we assume the following: Two bricks must be created on the same physical device One brick must be of size 4 TiB and the other is 2 TiB The device is /dev/sdb , and is a RAID-6 device with 12 disks The 12-disk RAID-6 device has been created according to the recommendations in this chapter, that is, with a stripe unit size of 128 KiB Create a single physical volume using pvcreate Create a single volume group on the device Create a separate thin pool for each brick using the following commands: In the examples above, the size of each thin pool is chosen to be the same as the size of the brick that will be created in it. With thin provisioning, there are many possible ways of managing space, and these options are not discussed in this chapter. Create a thin logical volume for each brick Follow the XFS recommendations (next step) in this chapter for creating and mounting filesystems for each of the thin logical volumes. XFS Recommendations XFS Inode Size As Red Hat Gluster Storage makes extensive use of extended attributes, an XFS inode size of 512 bytes works better with Red Hat Gluster Storage than the default XFS inode size of 256 bytes. Therefore, the inode size for XFS must be set to 512 bytes while formatting the Red Hat Gluster Storage bricks. To set the inode size, use the -i size option with the mkfs.xfs command as shown in the following Logical Block Size for the Directory section. XFS RAID Alignment When creating an XFS file system, you can explicitly specify the striping parameters of the underlying storage in the following format: For RAID 6, ensure that I/O is aligned at the file system layer by providing the striping parameters. For RAID 6 storage with 12 disks, if the recommendations above have been followed, the values must be as follows: For RAID 10 and JBOD, the -d su=<>,sw=<> option can be omitted; by default, XFS uses the thin pool chunk size and other parameters to make layout decisions (a consolidated JBOD example is shown after the command listing at the end of this section). Logical Block Size for the Directory An XFS file system allows you to select a logical block size for the file system directory that is greater than the logical block size of the file system. Increasing the logical block size for the directories from the default 4 K decreases the directory I/O, which in turn improves the performance of directory operations. To set the block size, use the -n size option with the mkfs.xfs command as shown in the following example output. The following is example output of a RAID 6 configuration along with the inode and block size options: Allocation Strategy inode32 and inode64 are the two most common allocation strategies for XFS. With the inode32 allocation strategy, XFS places all the inodes in the first 1 TiB of the disk. With a larger disk, all the inodes would be confined to the first 1 TiB. The inode32 allocation strategy is used by default. With the inode64 mount option, inodes are placed near the data, which minimizes disk seeks. To set the allocation strategy to inode64 when the file system is being mounted, use the -o inode64 option with the mount command as shown in the following Access Time section. Access Time If the application does not require the access time to be updated on files, then the file system must always be mounted with the noatime mount option. For example: This optimization improves performance of small-file reads by avoiding updates to the XFS inodes when files are read. Allocation groups Each XFS file system is partitioned into regions called allocation groups.
Allocation groups are similar to the block groups in ext3, but allocation groups are much larger than block groups and are used for scalability and parallelism rather than disk locality. The default allocation for an allocation group is 1 TiB. The allocation group count must be large enough to sustain the concurrent allocation workload. In most cases, the allocation group count chosen by the mkfs.xfs command gives optimal performance. Do not change the allocation group count chosen by mkfs.xfs while formatting the file system. Percentage of space allocated to inodes If the workload consists of very small files (average file size less than 10 KB), it is recommended to set the maxpct value to 10 while formatting the file system. The maxpct value can also be set up to 100 if needed for an arbiter brick. Performance tuning option in Red Hat Gluster Storage A tuned profile is designed to improve performance for a specific use case by tuning system parameters appropriately. Red Hat Gluster Storage includes tuned profiles tailored for its workloads. These profiles are available in both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7. Table 19.1. Recommended Profiles for Different Workloads Workload Profile Name Large-file, sequential I/O workloads rhgs-sequential-io Small-file workloads rhgs-random-io Random I/O workloads rhgs-random-io Earlier versions of Red Hat Gluster Storage on Red Hat Enterprise Linux 6 recommended tuned profiles rhs-high-throughput and rhs-virtualization . These profiles are still available on Red Hat Enterprise Linux 6. However, switching to the new profiles is recommended. Important Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See the Version Details table in the Red Hat Gluster Storage Software Components and Versions section of the Installation Guide. To apply the tunings contained in the tuned profile, run the following command after creating a Red Hat Gluster Storage volume. For example: Writeback Caching For small-file and random write performance, we strongly recommend writeback cache, that is, non-volatile random-access memory (NVRAM) in your storage controller. For example, normal Dell and HP storage controllers have it. Ensure that NVRAM is enabled, that is, the battery is working. Refer to your hardware documentation for details on enabling NVRAM. Do not enable writeback caching in the disk drives; with that policy, the disk drive considers a write complete before it actually reaches the magnetic media (platter). As a result, the disk write cache might lose its data during a power failure, or even lose metadata, leading to file system corruption. 19.2.1. Many Bricks per Node By default, for every brick configured on a Red Hat Gluster Storage server node, one process is created and one port is consumed. If you have a large number of bricks configured on a single server, enabling brick multiplexing reduces port and memory consumption by allowing compatible bricks to use the same process and port. Red Hat recommends restarting all volumes after enabling or disabling brick multiplexing. As of Red Hat Gluster Storage 3.4, brick multiplexing is supported only for OpenShift Container Storage use cases. Configuring Brick Multiplexing Set cluster.brick-multiplex to on . This option affects all volumes. Restart all volumes for brick multiplexing to take effect. Important Brick compatibility is determined when the volume starts, and depends on volume options shared between bricks.
When brick multiplexing is enabled, Red Hat recommends restarting the volume whenever any volume configuration details are changed in order to maintain the compatibility of the bricks grouped under a single process. 19.2.2. Port Range Configuration By default, for every brick configured on a Red Hat Gluster Storage server node, one process is created and one port is consumed. If you have a large number of bricks configured on a single server, configuring a port range lets you control the range of ports allocated by glusterd to newly created or existing bricks and volumes. This can be achieved with the help of the glusterd.vol file. The base-port and max-port options can be used to set the port range. By default, base-port is set to 49152, and max-port is set to 60999. Important If glusterd runs out of free ports to allocate within the specified range of base-port and max-port , newer bricks and volumes fail to start. Configuring Port Range Edit the glusterd.vol file on all the nodes. Remove the comment marker # corresponding to the base-port and max-port options. Define the port numbers in the base-port and max-port options. Save the glusterd.vol file and restart the glusterd service on each Red Hat Gluster Storage node.
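A minimal sketch of that final step, including opening the chosen range in the firewall (this assumes firewalld with the default public zone; adjust the zone and port range to match your configuration):
systemctl restart glusterd
firewall-cmd --zone=public --permanent --add-port=49152-60999/tcp
firewall-cmd --reload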
[ "pvcreate --dataalignment alignment_value disk", "pvcreate --dataalignment 1280k disk", "pvcreate --dataalignment 1536k disk", "pvs -o +pe_start /dev/sdb PV VG Fmt Attr PSize PFree 1st PE /dev/sdb lvm2 a-- 9.09t 9.09t 1.25m", "vgcreate --physicalextentsize extent_size VOLGROUP physical_volume", "vgcreate --physicalextentsize 1280k VOLGROUP physical_volume", "vgcreate VOLGROUP physical_volume", "lvcreate --thin VOLGROUP / POOLNAME --size POOLSIZE --chunksize CHUNKSIZE --poolmetadatasize METASIZE --zero n", "lvcreate --thin VOLGROUP / POOLNAME --extents 100%FREE --chunksize CHUNKSIZE --poolmetadatasize METASIZE --zero n", "adjustment_factor = device_size_in_tb / (preferred_chunk_size_in_kb * 4 / 64 )", "chunk_size = preferred_chunk_size * rounded_adjustment_factor", "lvcreate --thin VOLGROUP/thin_pool --size 2T --chunksize 1280k --poolmetadatasize 16G --zero n", "lvcreate --thin VOLGROUP/thin_pool --extents 100%FREE --chunksize 1280k --poolmetadatasize 16G --zero n", "lvcreate --thin VOLGROUP/thin_pool --size 2T --chunksize 1280k --poolmetadatasize 16G --zero n", "lvcreate --thin VOLGROUP/thin_pool --extents 100%FREE --chunksize 1280k --poolmetadatasize 16G --zero n", "lvcreate --thin --name LV_name --virtualsize LV_size VOLGROUP/thin_pool", "pvcreate --dataalignment 1280k /dev/sdb", "vgcreate --physicalextentsize 1280k vg1 /dev/sdb", "lvcreate --thin vg1/thin_pool_1 --size 4T --chunksize 1280K --poolmetadatasize 16G --zero n", "lvcreate --thin vg1/thin_pool_2 --size 2T --chunksize 1280K --poolmetadatasize 16G --zero n", "lvcreate --thin --name lv1 --virtualsize 4T vg1/thin_pool_1", "lvcreate --thin --name lv2 --virtualsize 2T vg1/thin_pool_2", "mkfs.xfs options /dev/vg1/lv1", "mkfs.xfs options /dev/vg1/lv2", "mount options /dev/vg1/lv1 mount_point_1", "mount options /dev/vg1/lv2 mount_point_2", "mkfs.xfs other_options -d su= stripe_unit_size ,sw= stripe_width_in_number_of_disks device", "mkfs.xfs other_options -d su=128k,sw=10 device", "mkfs.xfs -f -i size=512 -n size=8192 -d su=128k,sw=10 logical volume meta-data=/dev/mapper/gluster-brick1 isize=512 agcount=32, agsize=37748736 blks = sectsz=512 attr=2, projid32bit=0 data = bsize=4096 blocks=1207959552, imaxpct=5 = sunit=32 swidth=320 blks naming = version 2 bsize=8192 ascii-ci=0 log =internal log bsize=4096 blocks=521728, version=2 = sectsz=512 sunit=32 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0", "mount -t xfs -o inode64,noatime < logical volume > < mount point >", "/etc/fstab entry for option E + F <logical volume> <mount point>xfs inode64,noatime 0 0", "tuned-adm profile profile-name", "tuned-adm profile rhgs-sequential-io", "gluster volume set all cluster.brick-multiplex on", "gluster volume stop VOLNAME gluster volume start VOLNAME", "vi /etc/glusterfs/glusterd.vol", "volume management type mgmt/glusterd option working-directory /var/lib/glusterd option transport-type socket,rdma option transport.socket.keepalive-time 10 option transport.socket.keepalive-interval 2 option transport.socket.read-fail-log off option ping-timeout 0 option event-threads 1 option lock-timer 180 option transport.address-family inet6 option base-port 49152 option max-port 60999 end-volume", "option base-port 49152 option max-port 60999" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/Brick_Configuration
E.9. Changing Runlevels at Boot Time
E.9. Changing Runlevels at Boot Time Under Red Hat Enterprise Linux, it is possible to change the default runlevel at boot time. To change the runlevel of a single boot session, use the following instructions: When the GRUB menu bypass screen appears at boot time, press any key to enter the GRUB menu (within the first three seconds). Press the a key to append to the kernel command line. Add <space> <runlevel> at the end of the boot options line to boot to the desired runlevel. For example, the following entry would initiate a boot process into runlevel 3:
[ "grub append> ro root=/dev/VolGroup00/LogVol00 rhgb quiet 3" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-grub-runlevels
Chapter 12. Using a service account as an OAuth client
Chapter 12. Using a service account as an OAuth client 12.1. Service accounts as OAuth clients You can use a service account as a constrained form of OAuth client. Service accounts can request only a subset of scopes that allow access to some basic user information and role-based power inside of the service account's own namespace: user:info user:check-access role:<any_role>:<service_account_namespace> role:<any_role>:<service_account_namespace>:! When using a service account as an OAuth client: client_id is system:serviceaccount:<service_account_namespace>:<service_account_name> . client_secret can be any of the API tokens for that service account. For example: USD oc sa get-token <service_account_name> To get WWW-Authenticate challenges, set a serviceaccounts.openshift.io/oauth-want-challenges annotation on the service account to true . redirect_uri must match an annotation on the service account. 12.1.1. Redirect URIs for service accounts as OAuth clients Annotation keys must have the prefix serviceaccounts.openshift.io/oauth-redirecturi. or serviceaccounts.openshift.io/oauth-redirectreference. such as: In its simplest form, the annotation can be used to directly specify valid redirect URIs. For example: The first and second postfixes in the above example are used to separate the two valid redirect URIs. In more complex configurations, static redirect URIs may not be enough. For example, perhaps you want all Ingresses for a route to be considered valid. This is where dynamic redirect URIs via the serviceaccounts.openshift.io/oauth-redirectreference. prefix come into play. For example: Since the value for this annotation contains serialized JSON data, it is easier to see in an expanded format: { "kind": "OAuthRedirectReference", "apiVersion": "v1", "reference": { "kind": "Route", "name": "jenkins" } } Now you can see that an OAuthRedirectReference allows us to reference the route named jenkins . Thus, all Ingresses for that route will now be considered valid. The full specification for an OAuthRedirectReference is: { "kind": "OAuthRedirectReference", "apiVersion": "v1", "reference": { "kind": ..., 1 "name": ..., 2 "group": ... 3 } } 1 kind refers to the type of the object being referenced. Currently, only route is supported. 2 name refers to the name of the object. The object must be in the same namespace as the service account. 3 group refers to the group of the object. Leave this blank, as the group for a route is the empty string. Both annotation prefixes can be combined to override the data provided by the reference object. For example: The first postfix is used to tie the annotations together. Assuming that the jenkins route had an Ingress of https://example.com , now https://example.com/custompath is considered valid, but https://example.com is not. The format for partially supplying override data is as follows: Type Syntax Scheme "https://" Hostname "//website.com" Port "//:8000" Path "examplepath" Note Specifying a hostname override will replace the hostname data from the referenced object, which is not likely to be desired behavior. Any combination of the above syntax can be combined using the following format: <scheme:>//<hostname><:port>/<path> The same object can be referenced more than once for more flexibility: Assuming that the route named jenkins has an Ingress of https://example.com , then both https://example.com:8000 and https://example.com/custompath are considered valid. Static and dynamic annotations can be used at the same time to achieve the desired behavior:
[ "oc sa get-token <service_account_name>", "serviceaccounts.openshift.io/oauth-redirecturi.<name>", "\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"https://example.com\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"", "\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"", "{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": \"Route\", \"name\": \"jenkins\" } }", "{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": ..., 1 \"name\": ..., 2 \"group\": ... 3 } }", "\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"", "\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"//:8000\" \"serviceaccounts.openshift.io/oauth-redirectreference.second\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"", "\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/authentication_and_authorization/using-service-accounts-as-oauth-client
Chapter 6. Creating Ansible playbooks with the all-in-one Red Hat OpenStack Platform environment
Chapter 6. Creating Ansible playbooks with the all-in-one Red Hat OpenStack Platform environment The deployment command applies Ansible playbooks to the environment automatically. However, you can modify the deployment command to generate Ansible playbooks without applying them to the deployment, and run the playbooks later. Include the --output-only option in the deploy command to generate the undercloud-ansible-XXXXX directory. This directory contains a set of Ansible playbooks that you can run on other hosts. To generate the Ansible playbook directory, run the deploy command with the --output-only option: To run the Ansible playbooks, run the ansible-playbook command, and include the inventory.yaml file and the deploy_steps_playbook.yaml file:
[ "[stack@all-in-one]USD sudo openstack tripleo deploy --templates --local-ip=USDIP/USDNETMASK -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml -r /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml -e USDHOME/containers-prepare-parameters.yaml -e USDHOME/standalone_parameters.yaml --output-dir USDHOME --standalone --output-only", "[stack@all-in-one]USD cd undercloud-ansible-XXXXX [stack@all-in-one]USD sudo ansible-playbook -i inventory.yaml deploy_steps_playbook.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/quick_start_guide/creating-ansible-playbooks
Getting Started
Getting Started Red Hat 3scale API Management 2.15 Getting started with your 3scale API Management installation. Red Hat Customer Content Services
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/getting_started/index
Chapter 2. Installing a cluster on vSphere
Chapter 2. Installing a cluster on vSphere In OpenShift Container Platform version 4.13, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.3. VMware vSphere infrastructure requirements You must install an OpenShift Container Platform cluster on one of the following versions of a VMware vSphere instance that meets the requirements for the components that you use: Version 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later Version 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 2.1. Version requirements for vSphere virtual environments Virtual environment product Required version VMware virtual hardware 15 or later vSphere ESXi hosts 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter host 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Table 2.2. 
Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later with virtual hardware version 15 This hypervisor version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7.0 Update 2 and later; 8.0 Update 1 or later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later For more information about the compatibility of NSX and OpenShift Container Platform, see the Release Notes section of VMware's NSX container plugin documentation . CPU micro-architecture x86-64-v2 or higher OpenShift 4.13 and later are based on RHEL 9.2 host operating system which raised the microarchitecture requirements to x86-64-v2. See the RHEL Microarchitecture requirements documentation . You can verify compatibility by following the procedures outlined in this KCS article . Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Additional resources For more information about CSI automatic migration, see "Overview" in VMware vSphere CSI Driver Operator . 2.4. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 2.3. Ports used for all-machine to all-machine communications Protocol Port Description VRRP N/A Required for keepalived ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 2.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 2.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 2.5. 
VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. The presence of a third-party vSphere CSI driver prevents OpenShift Container Platform from updating to OpenShift Container Platform 4.13 or later. Note The VMware vSphere CSI Driver Operator is supported only on clusters deployed with platform: vsphere in the installation manifest. Additional resources To remove a third-party vSphere CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 2.6. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 2.1. 
Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If VMs will be created in the cluster root Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere vCenter Resource Pool If an existing resource pool is provided Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 2.2. 
Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. 
"vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 2.3. Required permissions and propagation settings vSphere object When required Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Existing resource pool False ReadOnly permission VMs in cluster root True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges vSphere vCenter Resource Pool Existing resource pool True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. Using Storage vMotion can cause issues and is not supported. Using VMware compute vMotion to migrate the workloads for both OpenShift Container Platform compute machines and control plane machines is generally supported, where generally implies that you meet all VMware best practices for vMotion. 
To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . If you are using VMware vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects that can result in data loss. OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You can use Dynamic Host Configuration Protocol (DHCP) for the network and configure the DHCP server to set persistent IP addresses to machines in your cluster. In the DHCP lease, you must configure the DHCP to use the default gateway. Note You do not need to use the DHCP for the network if you want to provision nodes with static IP addresses. If you are installing to a restricted environment, the VM in your restricted network must have access to vCenter so that it can provision and manage nodes, persistent volume claims (PVCs), and other resources. Note Ensure that each OpenShift Container Platform node in the cluster has access to a Network Time Protocol (NTP) server that is discoverable by DHCP. Installation is possible without an NTP server. However, asynchronous server clocks can cause errors, which the NTP server prevents. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. 
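Before you run the installation program, it can be useful to confirm that the two virtual IP addresses are not already in use on the target network. The following is a minimal sketch with placeholder addresses, 192.168.100.10 for the API VIP and 192.168.100.11 for the Ingress VIP; an absence of replies only suggests, and does not guarantee, that the addresses are free: USD ping -c 3 192.168.100.10 USD ping -c 3 192.168.100.11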
DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.6. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 2.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. 
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 2.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space. Important If you attempt to run the installation program on macOS, a known issue related to the golang compiler causes the installation of the OpenShift Container Platform cluster to fail. For more information about this issue, see the section named "Known Issues" in the OpenShift Container Platform 4.13 release notes document. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 2.9. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. 
For example, on a Fedora operating system, run the following command: # update-ca-trust extract 2.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. Important You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring an external load balancer". Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Important Some VMware vCenter Single Sign-On (SSO) environments with Active Directory (AD) integration might primarily require you to use the traditional login method, which requires the <domain>\ construct. To ensure that vCenter account permission checks complete properly, consider using the User Principal Name (UPN) login method, such as <username>@<fully_qualified_domainname> . Select the data center in your vCenter instance to connect to. Select the default vCenter datastore to use. Note Datastore and cluster names cannot exceed 60 characters; therefore, ensure the combined string length does not exceed the 60 character limit. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. 
Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured. Note Datastore and cluster names cannot exceed 60 characters; therefore, ensure the combined string length does not exceed the 60 character limit. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 2.11. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.12. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.13. Creating registry storage After you install the cluster, you must create storage for the registry Operator. 2.13.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 2.13.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. 
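As noted in the overview above, on platforms without shareable object storage the Image Registry Operator bootstraps as Removed, and you must switch it to Managed before configuring storage. The following is a minimal sketch of checking and changing that state with standard oc commands; verify the current state on your own cluster before patching: USD oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.spec.managementState}' USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge -p '{"spec":{"managementState":"Managed"}}'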
Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.13.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resourses found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 2.13.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. 
The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 2.14. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 2.15. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 2.16. steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues.
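After the installation completes and you have logged in with the oc CLI, a brief health check is a reasonable first step before you continue with the items listed above. The commands below are standard oc queries and are shown only as a sketch; the exact output depends on your cluster: USD oc get nodes USD oc get clusteroperators USD oc get pods -n openshift-image-registry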
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "certs β”œβ”€β”€ lin β”‚ β”œβ”€β”€ 108f4d17.0 β”‚ β”œβ”€β”€ 108f4d17.r1 β”‚ β”œβ”€β”€ 7e757f6a.0 β”‚ β”œβ”€β”€ 8e4f8471.0 β”‚ └── 8e4f8471.r0 β”œβ”€β”€ mac β”‚ β”œβ”€β”€ 108f4d17.0 β”‚ β”œβ”€β”€ 108f4d17.r1 β”‚ β”œβ”€β”€ 7e757f6a.0 β”‚ β”œβ”€β”€ 8e4f8471.0 β”‚ └── 8e4f8471.r0 └── win β”œβ”€β”€ 108f4d17.0.crt β”œβ”€β”€ 108f4d17.r1.crl β”œβ”€β”€ 7e757f6a.0.crt β”œβ”€β”€ 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_vsphere/installing-vsphere-installer-provisioned
5.2. Creating Cross-forest Trusts
5.2. Creating Cross-forest Trusts 5.2.1. Environment and Machine Requirements Before configuring a trust agreement, make sure that both the Active Directory and Identity Management servers, machines, and environments meet the requirements and settings described in this section. 5.2.1.1. Supported Windows Platforms You can establish a trust relationship with Active Directory forests that use the following forest and domain functional levels: Forest functional level range: Windows Server 2008 - Windows Server 2016 Domain functional level range: Windows Server 2008 - Windows Server 2016 The following operating systems are supported and tested for establishing a trust using the mentioned functional levels: Windows Server 2012 R2 Windows Server 2016 versions of Windows Server are not supported for establishing a trust. 5.2.1.2. DNS and Realm Settings To establish a trust, Active Directory and Identity Management require specific DNS configuration: Unique primary DNS domains Each system must have its own unique primary DNS domain configured. For example: ad.example.com for AD and idm.example.com for IdM example.com for AD and idm.example.com for IdM ad.example.com for AD and example.com for IdM Important If the IdM domain is the parent domain of the AD domain, the IdM servers must run on Red Hat Enterprise Linux 7.5 or later. The most convenient management solution is an environment where each DNS domain is managed by integrated DNS servers, but it is possible to use any other standard-compliant DNS server as well. It is not possible for AD or IdM to share the primary DNS domain with another system for identity management. For more information, see documentation for host name and DNS configuration requirements in the Linux Domain Identity, Authentication, and Policy Guide . Kerberos realm names as upper-case versions of primary DNS domain names Kerberos realm names must be the same as the primary DNS domain names, with all letters uppercase. For example, if the domain names are ad.example.com for AD and idm.example.com for IdM, the Kerberos realm names are required to be AD.EXAMPLE.COM and IDM.EXAMPLE.COM . DNS records resolvable from all DNS domains in the trust All machines must be able to resolve DNS records from all DNS domains involved in the trust relationship: When configuring IdM DNS, follow the instructions described in the section on configuring DNS services within the IdM domain and section on managing DNS forwarding in the Linux Domain Identity, Authentication, and Policy Guide . If you are using IdM without integrated DNS, follow the instructions described in the section describing the server installation without integrated DNS in the Linux Domain Identity, Authentication, and Policy Guide . No overlap between IdM and AD DNS domains Systems joined to IdM can be distributed over multiple DNS domains. DNS domains containing IdM clients must not overlap with DNS domains containing machines joined to AD. The primary IdM DNS domain must have proper SRV records to support AD trusts. Note In some environments with trusts between IdM and Active Directory, you can install an IdM client on a host that is part of the Active Directory DNS domain. The host can then benefit from the Linux-focused features of IdM. This is not a recommended configuration and has some limitations. Red Hat recommends to always deploy IdM clients in a DNS zone different from the ones owned by Active Directory and access IdM clients through their IdM host names. 
You can acquire a list of the required SRV records specific to your system setup by running the USD ipa dns-update-system-records --dry-run command. The generated list can look for example like this: For other DNS domains that are part of the same IdM realm, it is not required for the SRV records to be configured when the trust to AD is configured. The reason is that AD domain controllers do not use SRV records to discover KDCs but rather base the KDC discovery on name suffix routing information for the trust. Verifying the DNS Configuration Before configuring trust, verify that the Identity Management and Active Directory servers can resolve themselves and also each other. If running the commands described below does not display the expected results, inspect the DNS configuration on the host where the commands were executed. If the host configuration seems correct, make sure that DNS delegations from the parent to child domains are set up correctly. Note that AD caches the results of DNS lookups, and changes you make in DNS are therefore sometimes not visible immediately. You can delete the current cache by running the ipconfig /flushdns command. Verify that the IdM-hosted services are resolvable from the IdM domain server used for establishing trust Run a DNS query for the Kerberos over UDP and LDAP over TCP service records. The commands are expected to list all IdM servers. Run a DNS query for the TXT record with the IdM Kerberos realm name. The obtained value is expected to match the Kerberos realm that you specified when installing IdM. After you execute the ipa-adtrust-install utility, as described in Section 5.2.2.1.1, "Preparing the IdM Server for Trust" , run a DNS query for the MS DC Kerberos over UDP and LDAP over TCP service records. The commands are expected to list all IdM servers on which ipa-adtrust-install has been executed. Note that the output is empty if ipa-adtrust-install has not been executed on any IdM server, which is typically before establishing the very first trust relationship. Verify that IdM is able to resolve service records for AD Run a DNS query for the Kerberos over UDP and LDAP over TCP service records. These commands are expected to return the names of AD domain controllers. Verify that the IdM-hosted services are resolvable from the AD server On the AD server, set the nslookup.exe utility to look up service records. Enter the domain name for the Kerberos over UDP and LDAP over TCP service records. The expected output contains the same set of IdM servers as displayed in Verify that the IdM-hosted services are resolvable from the IdM domain server used for establishing trust . Change the service type to TXT and run a DNS query for the TXT record with the IdM Kerberos realm name. The output is expected to contain the same value as displayed in Verify that the IdM-hosted services are resolvable from the IdM domain server used for establishing trust . After you execute the ipa-adtrust-install utility, as described in Section 5.2.2.1.1, "Preparing the IdM Server for Trust" , run a DNS query for the MS DC Kerberos over UDP and LDAP over TCP service records. The command is expected to list all IdM servers on which the ipa-adtrust-install utility has been executed. Note that the output is empty if ipa-adtrust-install has not been executed on any IdM server, which is typically before establishing the very first trust relationship. Verify that AD services are resolvable from the AD server On the AD server, set the nslookup.exe utility to look up service records. 
Enter the domain name for the Kerberos over UDP and LDAP over TCP service records. The expected output contains the same set of AD servers as displayed in Verify that IdM is able to resolve service records for AD . 5.2.1.3. NetBIOS Names The NetBIOS name is critical for identifying the Active Directory (AD) domain and, if IdM has a trust configured with AD, for identifying the IdM domain and services. As a consequence, you must use a different NetBIOS name for the IdM domain than the NetBIOS names used in the AD domains to which you want to establish the forest trust. The NetBIOS name of an Active Directory or IdM domain is usually the far-left component of the corresponding DNS domain. For example, if the DNS domain is ad.example.com , the NetBIOS name is typically AD . Note The maximum length of a NetBIOS name is 15 characters. 5.2.1.4. Firewalls and Ports To enable communication between AD domain controllers and IdM servers, make sure you meet the following port requirements: Open ports required for an AD trust and ports required by an IdM server in an AD trust on IdM servers and all AD domain controllers in both directions: from the IdM servers to the AD domain controllers and back. Open the port required by an IdM client in an AD trust on all AD domain controllers of the trusted AD forest. On the IdM clients, make sure the port is open in the outgoing direction (see Prerequisites for Installing a Client in the Linux Domain Identity, Authentication, and Policy Guide ). Table 5.2. Ports Required for an AD Trust Service Port Protocol Endpoint resolution portmapper 135 TCP NetBIOS-DGM 138 TCP and UDP NetBIOS-SSN 139 TCP and UDP Microsoft-DS 445 TCP and UDP Endpoint mapper listener range 1024-1300 TCP AD Global Catalog 3268 TCP LDAP 389 TCP [a] and UDP [a] The TCP port 389 is not required to be open on IdM servers for trust, but it is necessary for clients communicating with the IdM server. Table 5.3. Ports Required by IdM Servers in a Trust Service Port Protocol Kerberos See Port Requirements in the Linux Domain Identity, Authentication, and Policy Guide . LDAP DNS Table 5.4. Ports Required by IdM Clients in an AD Trust Service Port Protocol Notes Kerberos 88 UDP and TCP The libkrb5 library uses UDP and falls back to the TCP protocol if the data sent from the Key Distribution Center (KDC) is too large. Active Directory attaches a Privilege Attribute Certificate (PAC) to the Kerberos ticket, which increases the size and in most cases requires using the TCP protocol. To avoid the fallback and resending of the request, by default, SSSD in Red Hat Enterprise Linux 7.4 and later uses TCP for user authentication. To configure the size limit before libkrb5 switches to TCP, set the udp_preference_limit option in the /etc/krb5.conf file. For details, see the krb5.conf (5) man page. Additional Resources For advice on how to open the required ports, see Port Requirements in the Linux Domain Identity, Authentication, and Policy Guide . 5.2.1.5. IPv6 Settings The IdM system must have the IPv6 protocol enabled in the kernel. If IPv6 is disabled, then the CLDAP plug-in used by the IdM services fails to initialize. 5.2.1.6. Clock Settings Both the Active Directory server and the IdM server must have their clocks in sync. 5.2.1.7. Creating a Conditional Forwarder for the IdM Domain in AD Prepare the AD DNS server to forward queries for the IdM domain to the IdM DNS server: On a Windows AD domain controller, open the Active Directory (AD) DNS console.
Right-click Conditional Forwarders , select New Conditional Forwarder . Enter the IdM DNS domain name and the IP address of the IdM DNS server. Select Store this conditional forwarder in Active Directory, and replicate it as follows , and select the replication setting that matches your environment. Click OK . To verify that the AD domain controller (DC) can resolve DNS entries from the IdM domain, open a command prompt and enter: If the command returns the IP address of the IdM server, the conditional forwarder is working correctly. 5.2.1.8. Creating a Forward Zone for the AD Domain in IdM Prepare the IdM DNS server to forward queries for the AD domain to the AD DNS server: On the IdM server, create a forward zone entry for the AD DNS domain. For further details about creating a DNS forward zone in IdM, see the Configuring Forward Zones section in the Linux Domain Identity, Authentication, and Policy Guide . If the AD DNS server does not support DNSSEC, disable DNSSEC validation on the IdM server: Edit the /etc/named.conf file and set the dnssec-validation parameter to no : Restart the named-pkcs11 service: To verify that the IdM server can resolve DNS entries from the AD domain, enter: If the command returns the IP address of the AD DC, the forward zone is working correctly. 5.2.1.9. Supported User Name Formats IdM performs user name mapping in the local SSSD client. The default output user name format for users from trusted domains supported by SSSD is user_name@domain . Active Directory supports several different kinds of name formats: user_name , user_name@DOMAIN_NAME , and DOMAIN_NAME\user_name . Users can authenticate to the system with either their user name only ( user_name ) or their fully-qualified user name ( user_name@domain_name ). Warning Preferably, use the fully-qualified user name to avoid conflicts if the same user name exists in multiple domains. If a user specifies only the user name without the domain, SSSD searches for the account in all domains configured in the /etc/sssd/sssd.conf file and in trusted domains. If you configured a domain resolution order as described in Section 8.5.3, "Configuring the Domain Resolution Order on an IdM Client" , SSSD searches for the user in the defined order. In any case, SSSD uses the first entry found. This can lead to problems or confusion if the same user name exists in multiple domains and the first entry found is not the expected one. By default, SSSD always displays user names in the fully-qualified format. For details about changing the format, see Section 5.5, "Changing the Format of User Names Displayed by SSSD" . To identify the user name and the domain to which the user name belongs, SSSD uses a regular expression defined in the re_expression option. The regular expression is used for IdM back ends or AD back ends and supports all the mentioned formats: 5.2.2. Creating Trusts The following sections describe creating trusts in various configuration scenarios. Section 5.2.2.1, "Creating a Trust from the Command Line" contains the full procedure for configuring a trust from the command line. The other sections describe the steps which are different from this basic configuration scenario and reference the basic procedure for all other steps. Note If you set up a replica in an existing trust environment, the replica is not automatically configured as a trust controller. To configure the replica as an additional trust controller, follow the procedures in this section.
After creating a trust, see Section 5.2.3, "Post-installation Considerations for Cross-forest Trusts" . 5.2.2.1. Creating a Trust from the Command Line Creating a trust relationship between the IdM and Active Directory Kerberos realms involves the following steps: Preparing the IdM server for the trust, described in Section 5.2.2.1.1, "Preparing the IdM Server for Trust" Creating a trust agreement, described in Section 5.2.2.1.2, "Creating a Trust Agreement" Verifying the Kerberos configuration, described in Section 5.2.2.1.3, "Verifying the Kerberos Configuration" 5.2.2.1.1. Preparing the IdM Server for Trust To set up the IdM server for a trust relationship with AD, follow these steps: Install the required IdM, trust, and Samba packages: Configure the IdM server to enable trust services. You can skip this step if you installed the server with the ipa-replica-install --setup-adtrust command. Run the ipa-adtrust-install utility: The utility adds DNS service records required for AD trusts. These records are created automatically if IdM was installed with an integrated DNS server. If IdM was installed without an integrated DNS server, ipa-adtrust-install prints a list of service records that you must manually add to the DNS before you can continue. Important Red Hat strongly recommends to verify the DNS configuration as described in the section called "Verifying the DNS Configuration" every time after running ipa-adtrust-install , especially if IdM or AD do not use integrated DNS servers. The script prompts to configure the slapi-nis plug-in, a compatibility plug-in that allows older Linux clients to work with trusted users. At least one user (the IdM administrator) exists when the directory is first installed. The SID generation task can create a SID for any existing users to support the trust environment. This is a resource-intensive task; for a high number of users, this can be run separately. Make sure that DNS is properly configured, as described in Section 5.2.1.2, "DNS and Realm Settings" . Start the smb service: Optionally, configure that the smb service starts automatically when the system boots: Optionally, use the smbclient utility to verify that Samba responds to Kerberos authentication from the IdM side. 5.2.2.1.2. Creating a Trust Agreement Create a trust agreement for the Active Directory domain and the IdM domain by using the ipa trust-add command: The ipa trust-add command sets up a one-way trust by default. It is not possible to establish a two-way trust in RHEL 7. To establish an external trust, pass the --external=true option to the ipa trust-add command. See Section 5.1.5, "External Trusts to Active Directory" for details. Note The ipa trust-add command configures the server as a trust controller by default. See Section 5.1.6, "Trust Controllers and Trust Agents" for details. The following example establishes a two-way trust by using the --two-way=true option: 5.2.2.1.3. Verifying the Kerberos Configuration To verify the Kerberos configuration, test if it is possible to obtain a ticket for an IdM user and if the IdM user can request service tickets. To verify a two-way trust: Request a ticket for an IdM user: Request service tickets for a service within the IdM domain: Request service tickets for a service within the AD domain: If the AD service ticket is successfully granted, there is a cross-realm ticket-granting ticket (TGT) listed with all of the other requested tickets. The TGT is named krbtgt /[email protected] . 
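Read as one session, the two-way verification above amounts to the following short sketch. The user name and host names are illustrative placeholders, and the exact realms shown in the comments depend on your environment:

kinit ipauser                          # obtain an initial TGT as an IdM user
kvno -S host ipaserver.example.com     # request a service ticket for a host in the IdM domain
kvno -S cifs adserver.example.com      # request a service ticket for a CIFS service in the AD domain
klist                                  # the output is expected to include a cross-realm TGT such as krbtgt/AD.EXAMPLE.COM@IPA.EXAMPLE.COM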
To verify a one-way trust from the IdM side: Request a ticket for an Active Directory user: Request service tickets for a service within the IdM domain: If the AD service ticket is successfully granted, there is a cross-realm ticket-granting ticket (TGT) listed with all of the other requested tickets. The TGT is named krbtgt /[email protected] . The localauth plug-in maps Kerberos principals to local SSSD user names. This allows AD users to use Kerberos authentication and access Linux services, which support GSSAPI authentication directly. Note For more information about the plug-in, see Section 5.3.7.2, "Using SSH Without Passwords" . 5.2.2.2. Creating a Trust Using a Shared Secret A shared secret is a password that is known to trusted peers and can be used by other domains to join the trust. The shared secret can configure both one-way and two-way trusts within Active Directory (AD). In AD, the shared secret is stored as a trusted domain object (TDO) within the trust configuration. IdM supports creating a one-way or two-way trust using a shared secret instead of the AD administrator credentials. Setting up such a trust requires the administrator to create the shared secret in AD and manually validate the trust on the AD side. 5.2.2.2.1. Creating a Two-Way Trust Using a Shared Secret To create a two-way trust with a shared secret with a Microsoft Windows Server 2012, 2012 R2, or 2016: Prepare the IdM server for the trust, as described in Section 5.2.2.1.1, "Preparing the IdM Server for Trust" . If the IdM and AD hosts use a DNS server that cannot resolve both domains, set up forwarding for the DNS zones: Prepare the AD DNS server to forward queries for the IdM domain to the IdM DNS server. For details, see Section 5.2.1.7, "Creating a Conditional Forwarder for the IdM Domain in AD" . Prepare the IdM DNS server to forward queries for the AD domain to the AD DNS server. For details, see Section 5.2.1.8, "Creating a Forward Zone for the AD Domain in IdM" . Configure a trust in the Active Directory Domains and Trusts console. In particular: Create a new trust. Give the trust the IdM domain name, for example idm.example.com . Specify that this is a forest type of trust. Specify that this is a two-way type of trust. Specify that this is a forest-wide authentication. Set the trust password . Note The same password must be used when configuring the trust in IdM. When asked to confirm the incoming trust, select No . Create a trust agreement, as described in Section 5.2.2.1.2, "Creating a Trust Agreement" . When running the ipa trust-add command, use the --type , --trust-secret and --two-way=True options, and omit the --admin option. For example: Retrieve the list of domains: On the IdM server, verify that the trust relationship is established by using the ipa trust-show command. Optionally, search for the trusted domain: Verify the Kerberos configuration, as described in Section 5.2.2.1.3, "Verifying the Kerberos Configuration" . 5.2.2.2.2. Creating a One-Way Trust Using a Shared Secret To create a one-way trust using a shared secret with a Microsoft Windows Server 2012, 2012 R2 or 2016: Prepare the IdM server for the trust, as described in Section 5.2.2.1.1, "Preparing the IdM Server for Trust" . If the IdM and AD hosts use a DNS server that cannot resolve both domains, set up forwarding for the DNS zones: Prepare the AD DNS server to forward queries for the IdM domain to the IdM DNS server. For details, see Section 5.2.1.7, "Creating a Conditional Forwarder for the IdM Domain in AD" . 
Prepare the IdM DNS server to forward queries for the AD domain to the AD DNS server. For details, see Section 5.2.1.8, "Creating a Forward Zone for the AD Domain in IdM" . Configure a trust in the Active Directory Domains and Trusts console: Right-click the domain name, and select Properties . On the Trusts tab, click New Trust . Enter the IdM domain name, and click . Select Forest trust , and click . Select One-way: incoming , and click . Select This domain only , and click . Enter a shared secret (trust password), and click . Verify the settings, and click . When the system asks if you want to confirm the incoming trust, select No, do not confirm the incoming trust , and click . Click Finish . Create a trust agreement: Enter the shared secret you set in the AD Domains and Trusts console. Validate the trust in the Active Directory Domains and Trusts console: Right-click the domain name, and select Properties . On the Trusts tab, select the domain in the Domains that trust this domain (incoming trusts) pane , and click Properties . Click the Validate button. Select Yes, validate the incoming trust , and enter the credentials of the IdM admin user. Update the list of trusted domains: List the trusted domains: Optionally, verify that the IdM server can retrieve user information from the AD domain: 5.2.2.3. Verifying the ID Mapping To verify the ID mapping: Run the following command on a Windows Active Directory domain controller (DC) to list the highest ID: List the ID ranges on an IdM server: You require the first POSIX ID value in a later step. On the Active Directory DC, display the security identifier (SID) of a user. For example, to display the SID of administrator : The last part of the SID is the relative identifier (RID). You require the user's RID in the next step. Note If the RID is higher than the default ID range (200000), use the ipa idrange-mod command to extend the range. For example: Display the user ID of the same user on the IdM server: If you add the first POSIX ID value (610600000) to the RID (500), it must match the user ID displayed on the IdM server (610600500). 5.2.2.4. Creating a Trust on an Existing IdM Instance When configuring a trust for an existing IdM instance, certain settings for the IdM server and entries within its domain are already configured. However, you must set the DNS configuration for the Active Directory domain and assign Active Directory SIDs to all existing IdM users and groups. Prepare the IdM server for the trust, as described in Section 5.2.2.1.1, "Preparing the IdM Server for Trust" . Create a trust agreement, as described in Section 5.2.2.1.2, "Creating a Trust Agreement" . Generate SIDs for each IdM user. Note Do not perform this step if the SIDs were generated when the ipa-adtrust-install utility was used to establish the trust. Add a new ipaNTSecurityIdentifier attribute, containing a SID, automatically for each entry by running the ipa-sidgen-task operation on the back-end LDAP directory. After the task completes successfully, a message is recorded in the error logs that the SID generation task ( Sidgen task ) finished with a status of zero (0). Verify the Kerberos configuration, as described in Section 5.2.2.1.3, "Verifying the Kerberos Configuration" . 5.2.2.5. Adding a Second Trust When adding a trust on an IdM server that already has one or more trust agreements configured, certain general IdM trust settings, such as installing the trust-related packages or configuring SIDs, are no longer required.
To add an additional trust, you only must configure DNS and establish a trust agreement. Make sure that DNS is properly configured, as described in Section 5.2.1.2, "DNS and Realm Settings" . Create a trust agreement, as described in Section 5.2.2.1.2, "Creating a Trust Agreement" . 5.2.2.6. Creating a Trust in the Web UI Before creating a trust in the web UI, prepare the IdM server for the trust. This trust configuration is easiest to perform from the command line, as described in Section 5.2.2.1.1, "Preparing the IdM Server for Trust" . Once the initial configuration is set, a trust agreement can be added in the IdM web UI: Open the IdM web UI: Open the IPA Server main tab, and select the Trusts subtab. In the Trusts subtab, click Add to open the new trust configuration window. Fill in the required information about the trust: Provide the AD domain name in the Domain field. To set up the trust as two-way, select the Two-way trust check box. To set up the trust as one-way, leave Two-way trust unselected. For more information about one-way and two-way trusts, see Section 5.1.4, "One-Way and Two-Way Trusts" . To establish an external trust to a domain in another forest, select the External Trust check box. For more information, see Section 5.1.5, "External Trusts to Active Directory" . The Establish using section defines how the trust is to be established: To establish the trust using the AD administrator's user name and password, select Administrative account and provide the required credentials. Alternatively, to establish the trust with a shared password, select Pre-shared password and provide the trust password. Define the ID configuration for the trust: The Range type option allows you to choose the ID range type. If you want IdM to automatically detect what kind of ID range to use, select Detect . To define the starting ID of the ID range, use the Base ID field. To define the size of the ID range, use the Range size field. If you want IdM to use default values for the ID range, do not specify these options. For more information about ID ranges, see the section called "ID Ranges" . Figure 5.5. Adding a Trust in the Web UI Click Add to save the new trust. After this, verify the Kerberos configuration, as described in Section 5.2.2.1.3, "Verifying the Kerberos Configuration" . 5.2.3. Post-installation Considerations for Cross-forest Trusts 5.2.3.1. Potential Behavior Issues with Active Directory Trust 5.2.3.1.1. Active Directory Users and IdM Administration Currently, Active Directory (AD) users and administrators can only see their self-service page after logging into the IdM Web UI. AD administrators cannot access the administrator's view of IdM Web UI. For details, see the Authenticating to the IdM Web UI as an AD User section in the Linux Domain Identity, Authentication, and Policy Guide . Additionally, AD users currently cannot manage their own ID overrides. Only IdM users can add and manage ID overrides. 5.2.3.1.2. Authenticating Deleted Active Directory Users By default, every IdM client uses the SSSD service to cache user identities and credentials. If the IdM or AD back-end provider is temporarily unavailable, SSSD enables the local system to reference identities for users who have already logged in successfully once. Because SSSD maintains a list of users locally, changes that are made on the back end might not be immediately visible to clients that run SSSD offline. 
On such clients, users who have previously logged into IdM resources and whose hashed passwords are stored in the SSSD cache are able to log in again even if their user accounts have been deleted in AD. If the above conditions are met, the user identity is cached in SSSD, and the AD user is able to log into IdM resources even if the user account is deleted in AD. This problem persists until SSSD goes online and is able to verify the AD user logon against the AD domain controllers. If the client system runs SSSD online, the password provided by the user is validated by an AD domain controller. This ensures that deleted AD users are not allowed to log in. 5.2.3.1.3. Credential Cache Collections and Selecting Active Directory Principals The Kerberos credentials cache attempts to match a client principal to a server principal based on the following identifiers in this order: service name host name realm name When the client and server mapping is based on the host name or realm name and credential cache collections are used, unexpected behavior can occur in binding as an AD user. This is because the realm name of the Active Directory user is different from the realm name of the IdM system. If an AD user obtains a ticket using the kinit utility and then uses SSH to connect to an IdM resource, the principal is not selected for the resource ticket. An IdM principal is used instead, because the IdM principal matches the realm name of the resource. For example, if the AD user is Administrator and the domain is ADEXAMPLE.ADREALM , the principal is [email protected] . This is set as the default principal in the Active Directory ticket cache. However, if any IdM user also has a Kerberos ticket (such as admin ), then there is a separate IdM credentials cache, with an IdM default principal. That IdM default principal is selected for a host ticket if the Active Directory user uses SSH to connect to a resource. This is because the realm name of the IdM principal matches the realm of the IdM resource. 5.2.3.1.4. Resolving Group SIDs Losing Kerberos Tickets Running a command to obtain a SID from the Samba service, such as net getlocalsid or net getdomainsid , removes any existing admin ticket from the Kerberos cache. Note You are not required to run commands such as net getlocalsid or net getdomainsid in order to use Active Directory trusts. Cannot Verify Group Membership for Users It is not possible to verify that a specific trusted user is associated with a specific IdM group, external or POSIX. Cannot Display Remote Active Directory Group Memberships for an Active Directory User Important Note that this problem no longer occurs if the IdM server and client run on Red Hat Enterprise Linux 7.1 or later. The id utility can be used to display local group associations for Linux system users. However, id does not display Active Directory group memberships for Active Directory users, even though Samba tools do display them. To work around this, you can use the ssh utility to log into an IdM client machine as the given AD user. After the AD user logs in successfully for the first time, the id search detects and displays the AD group memberships: 5.2.3.2. Configuring Trust Agents After you set up a new replica in a trust environment, the replica does not automatically have the AD trust agent role installed.
To configure the replica as a trust agent: On an existing trust controller, run the ipa-adtrust-install --add-agents command: The command starts an interactive configuration session and prompts you for the information required to set up the agent. For further information about the --add-agents option, see the ipa-adtrust-install (1) man page. On the new replica: Restart the IdM service: Remove all entries from the SSSD cache: Note To use the sssctl command, the sssd-tools package must be installed. Optionally, verify that the replica has the AD trust agent role installed:
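For example, a minimal check might look like the following; the replica host name is a placeholder, and the exact role-listing options can vary slightly between IdM versions:

ipa server-show new-replica.idm.example.com      # the Enabled server roles field is expected to list AD trust agent
ipa server-role-find --role="AD trust agent"     # lists every server in the topology that holds the role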
[ "ipa dns-update-system-records --dry-run IPA DNS records: _kerberos-master._tcp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos-master._udp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos._tcp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos._udp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos.example.com. 86400 IN TXT \"EXAMPLE.COM\" _kpasswd._tcp.example.com. 86400 IN SRV 0 100 464 server.example.com. _kpasswd._udp.example.com. 86400 IN SRV 0 100 464 server.example.com. _ldap._tcp.example.com. 86400 IN SRV 0 100 389 server.example.com. _ntp._udp.example.com. 86400 IN SRV 0 100 123 server.example.com.", "dig +short -t SRV _kerberos._udp. ipa.example.com. 0 100 88 ipamaster1.ipa.example.com. dig +short -t SRV _ldap._tcp. ipa.example.com. 0 100 389 ipamaster1.ipa.example.com.", "dig +short -t TXT _kerberos. ipa.example.com. IPA.EXAMPLE.COM", "dig +short -t SRV _kerberos._udp.dc._msdcs. ipa.example.com. 0 100 88 ipamaster1.ipa.example.com. dig +short -t SRV _ldap._tcp.dc._msdcs. ipa.example.com. 0 100 389 ipamaster1.ipa.example.com.", "dig +short -t SRV _kerberos._udp.dc._msdcs. ad.example.com. 0 100 88 addc1.ad.example.com. dig +short -t SRV _ldap._tcp.dc._msdcs. ad.example.com. 0 100 389 addc1.ad.example.com.", "C:\\>nslookup.exe > set type=SRV", "> _kerberos._udp.ipa.example.com. _kerberos._udp.ipa.example.com. SRV service location: priority = 0 weight = 100 port = 88 svr hostname = ipamaster1.ipa.example.com > _ldap._tcp.ipa.example.com _ldap._tcp.ipa.example.com SRV service location: priority = 0 weight = 100 port = 389 svr hostname = ipamaster1.ipa.example.com", "C:\\>nslookup.exe > set type=TXT > _kerberos.ipa.example.com. _kerberos.ipa.example.com. text = \"IPA.EXAMPLE.COM\"", "C:\\>nslookup.exe > set type=SRV > _kerberos._udp.dc._msdcs.ipa.example.com. _kerberos._udp.dc._msdcs.ipa.example.com. SRV service location: priority = 0 weight = 100 port = 88 svr hostname = ipamaster1.ipa.example.com > _ldap._tcp.dc._msdcs.ipa.example.com. _ldap._tcp.dc._msdcs.ipa.example.com. SRV service location: priority = 0 weight = 100 port = 389 svr hostname = ipamaster1.ipa.example.com", "C:\\>nslookup.exe > set type=SRV", "> _kerberos._udp.dc._msdcs.ad.example.com. _kerberos._udp.dc._msdcs.ad.example.com. SRV service location: priority = 0 weight = 100 port = 88 svr hostname = addc1.ad.example.com > _ldap._tcp.dc._msdcs.ad.example.com. _ldap._tcp.dc._msdcs.ad.example.com. SRV service location: priority = 0 weight = 100 port = 389 svr hostname = addc1.ad.example.com", "C:\\> nslookup server.idm.example.com", "dnssec-validation no;", "systemctl restart named-pkcs11", "host server.ad.example.com", "re_expression = (((?P<domain>[^\\\\]+)\\\\(?P<name>.+USD))|((?P<name>[^@]+)@(?P<domain>.+USD))|(^(?P<name>[^@\\\\]+)USD))", "yum install ipa-server ipa-server-trust-ad samba-client", "ipa-adtrust-install", "Do you want to enable support for trusted domains in Schema Compatibility plugin? This will allow clients older than SSSD 1.9 and non-Linux clients to work with trusted users. Enable trusted domains support in slapi-nis? [no]: y", "Do you want to run the ipa-sidgen task? [no]: yes", "systemctl start smb", "systemctl enable smb", "smbclient -L ipaserver.ipa.example.com -k lp_load_ex: changing to config backend registry Sharename Type Comment --------- ---- ------- IPCUSD IPC IPC Service (Samba 4.9.1) Reconnecting with SMB1 for workgroup listing. 
Server Comment --------- ------- Workgroup Master --------- -------", "ipa trust-add --type= type ad_domain_name --admin ad_admin_username --password", "ipa trust-add --type=ad ad.example.com --admin Administrator --password --two-way=true Active Directory domain administrator's password: ------------------------------------------------------- Added Active Directory trust for realm \"ad.example.com\" ------------------------------------------------------- Realm-Name: ad.example.com Domain NetBIOS name: AD Domain Security Identifier: S-1-5-21-796215754-1239681026-23416912 SID blacklist incoming: S-1-5-20, S-1-5-3, S-1-5-2, S-1-5-1, S-1-5-7, S-1-5-6, S-1-5-5, S-1-5-4, S-1-5-9, S-1-5-8, S-1-5-17, S-1-5-16, S-1-5-15, S-1-5-14, S-1-5-13, S-1-5-12, S-1-5-11, S-1-5-10, S-1-3, S-1-2, S-1-1, S-1-0, S-1-5-19, S-1-5-18 SID blacklist outgoing: S-1-5-20, S-1-5-3, S-1-5-2, S-1-5-1, S-1-5-7, S-1-5-6, S-1-5-5, S-1-5-4, S-1-5-9, S-1-5-8, S-1-5-17, S-1-5-16, S-1-5-15, S-1-5-14, S-1-5-13, S-1-5-12, S-1-5-11, S-1-5-10, S-1-3, S-1-2, S-1-1, S-1-0, S-1-5-19, S-1-5-18 Trust direction: Two-way trust Trust type: Active Directory domain Trust status: Established and verified", "kinit user", "kvno -S host ipaserver.example.com", "kvno -S cifs adserver.example.com", "klist Ticket cache: FILE:/tmp/krb5cc_0 Default principal: [email protected] Valid starting Expires Service principal 06/15/12 12:13:04 06/16/12 12:12:55 krbtgt/[email protected] 06/15/12 12:13:13 06/16/12 12:12:55 host/[email protected] 06/15/12 12:13:23 06/16/12 12:12:55 krbtgt/[email protected] 06/15/12 12:14:58 06/15/12 22:14:58 cifs/[email protected]", "kinit user @ AD.DOMAIN", "kvno -S host ipaserver.example.com", "klist Ticket cache: KEYRING:persistent:0:krb_ccache_hRtox00 Default principal: [email protected] Valid starting Expires Service principal 03.05.2016 18:31:06 04.05.2016 04:31:01 host/[email protected] renew until 04.05.2016 18:31:00 03.05.2016 18:31:06 04.05.2016 04:31:01 krbtgt/[email protected] renew until 04.05.2016 18:31:00 03.05.2016 18:31:01 04.05.2016 04:31:01 krbtgt/[email protected] renew until 04.05.2016 18:31:00", "ipa trust-add --type=ad ad.example.com --trust-secret --two-way=True Shared secret for the trust: ------------------------------------------------------- Added Active Directory trust for realm \"ad.example.com\" ------------------------------------------------------- Realm-Name: ad.example.com Domain NetBIOS name: AD Domain Security Identifier: S-1-5-21-796215754-1239681026-23416912 SID blacklist incoming: S-1-5-20, S-1-5-3, S-1-5-2, S-1-5-1, S-1-5-7, S-1-5-6, S-1-5-5, S-1-5-4, S-1-5-9, S-1-5-8, S-1-5-17, S-1-5-16, S-1-5-15, S-1-5-14, S-1-5-13, S-1-5-12, S-1-5-11, S-1-5-10, S-1-3, S-1-2, S-1-1, S-1-0, S-1-5-19, S-1-5-18 SID blacklist outgoing: S-1-5-20, S-1-5-3, S-1-5-2, S-1-5-1, S-1-5-7, S-1-5-6, S-1-5-5, S-1-5-4, S-1-5-9, S-1-5-8, S-1-5-17, S-1-5-16, S-1-5-15, S-1-5-14, S-1-5-13, S-1-5-12, S-1-5-11, S-1-5-10, S-1-3, S-1-2, S-1-1, S-1-0, S-1-5-19, S-1-5-18 Trust direction: Trusting forest Trust type: Active Directory domain Trust status: Waiting for confirmation by remote side", "ipa trust-fetch-domains ad_domain", "ipa trust-show ad.example.com Domain NetBIOS name: AD Domain Security Identifier: S-1-5-21-796215754-1239681026-23416912 Trust direction: Trusting forest Trust type: Active Directory domain", "ipa trustdomain-find ad.example.com Domain name: ad.example.com Domain NetBIOS name: AD Domain Security Identifier: S-1-5-21-796215754-1239681026-23416912 Domain enabled: True", "ipa trust-add --type=ad 
--trust-secret ad.example.com Shared secret for the trust: password ------------------------------------------------------- Added Active Directory trust for realm \" ad.example.com \" ------------------------------------------------------- Realm name: ad.example.com Domain NetBIOS name: AD Domain Security Identifier: S-1-5-21-1762709870-351891212-3141221786 Trust direction: Trusting forest Trust type: Active Directory domain Trust status: Waiting for confirmation by remote side", "ipa trust-fetch-domains ad.example.com ---------------------------------------------------------------------------------------- List of trust domains successfully refreshed. Use trustdomain-find command to list them. ---------------------------------------------------------------------------------------- ---------------------------- Number of entries returned 0 ----------------------------", "ipa trustdomain-find ad.example.com Domain name: ad.example.com Domain NetBIOS name: AD Domain Security Identifier: S-1-5-21-1762709870-351891212-3141221786 Domain enabled: True ---------------------------- Number of entries returned 1 ----------------------------", "getent passwd administrator @ ad.example.com administrator @ ad.example.com :*: 610600500 : 610600500 : Administrator : /home/ad.example.com/administrator :", "C:\\> dcdiag /v /test:ridmanager /s:ad.example.com Available RID Pool for the Domain is 1600 to 1073741823", "ipa idrange-find ---------------- 1 range matched ---------------- Range name: AD.EXAMPLE.COM_id_range First Posix ID of the range: 610600000 Number of IDs in the range: 200000 First RID of the corresponding RID range: 0 Domain SID of the trusted domain: S-1-5-21-796215754-1239681026-23416912 Range type: Active Directory domain range ---------------------------- Number of entries returned 1 ----------------------------", "C:\\> wmic useraccount where name=\"administrator\" get sid S-1-5-21-796215754-1239681026-23416912- 500", "ipa idrange-mod --range-size= 1000000 AD.EXAMPLE.COM_id_range", "id ad\\\\administrator uid= 610600500 ([email protected])", "ldapmodify -x -H ldap:// ipaserver.ipa.example.com :389 -D \"cn=directory manager\" -w password dn: cn=sidgen,cn=ipa-sidgen-task,cn=tasks,cn=config changetype: add objectClass: top objectClass: extensibleObject cn: sidgen nsslapd-basedn: dc=ipadomain,dc=com delay: 0 adding new entry \"cn=sidgen,cn=ipa-sidgen-task,cn=tasks,cn=config\"", "grep \"sidgen_task_thread\" /var/log/dirsrv/slapd-IDM-EXAMPLE-COM/errors [20/Jul/2012:18:17:16 +051800] sidgen_task_thread - [file ipa_sidgen_task.c, line 191]: Sidgen task starts [20/Jul/2012:18:17:16 +051800] sidgen_task_thread - [file ipa_sidgen_task.c, line 196]: Sidgen task finished [0].", "https:// ipaserver.example.com", "kinit [email protected] Password for [email protected]: klist Ticket cache: KEYRING:persistent:0:0 Default principal: [email protected] Valid starting Expires Service principal 27.11.2015 11:25:23 27.11.2015 21:25:23 krbtgt/[email protected] renew until 28.11.2015 11:25:16", "ssh -l [email protected] ipaclient.example.com [email protected]@ipaclient.example.com's password: klist -A Ticket cache: KEYRING:persistent:0:0 Default principal: [email protected] Valid starting Expires Service principal 27.11.2015 11:25:23 27.11.2015 21:25:23 krbtgt/[email protected] renew until 28.11.2015 11:25:16 Ticket cache: KEYRING:persistent:0:0 Default principal: [email protected] >>>>> IdM user Valid starting Expires Service principal 27.11.2015 11:25:18 28.11.2015 11:25:16 krbtgt/[email protected] 27.11.2015 
11:25:48 28.11.2015 11:25:16 host/[email protected] >>>>> host principal", "id ADDOMAIN\\user uid=1921801107([email protected]) gid=1921801107([email protected]) groups=1921801107([email protected]),129600004(ad_users),1921800513(domain [email protected])", "ipa-adtrust-install --add-agents", "ipactl restart", "sssctl cache-remove", "ipa server-show new_replica.idm.example.com Enabled server roles: CA server, NTP server, AD trust agent" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/trust-during
Chapter 1. Cluster lifecycle with multicluster engine operator overview
Chapter 1. Cluster lifecycle with multicluster engine operator overview The multicluster engine operator is the cluster lifecycle operator that provides cluster management capabilities for OpenShift Container Platform and Red Hat Advanced Cluster Management hub clusters. From the hub cluster, you can create and manage clusters, as well as destroy any clusters that you created. You can also hibernate, resume, and detach clusters. The multicluster engine operator is the cluster lifecycle operator that provides cluster management capabilities for Red Hat OpenShift Container Platform and Red Hat Advanced Cluster Management hub clusters. If you installed Red Hat Advanced Cluster Management, you do not need to install multicluster engine operator, as it is automatically installed. Information: Your cluster is created by using the OpenShift Container Platform cluster installer with the Hive resource. You can find more information about the process of installing OpenShift Container Platform clusters at Installing and configuring OpenShift Container Platform clusters in the OpenShift Container Platform documentation. With your OpenShift Container Platform cluster, you can use multicluster engine operator as a standalone cluster manager for cluster lifecycle function, or you can use it as part of a Red Hat Advanced Cluster Management hub cluster. If you are using OpenShift Container Platform only, the operator is included with subscription. Visit About multicluster engine for Kubernetes operator from the OpenShift Container Platform documentation. If you subscribe to Red Hat Advanced Cluster Management, you also receive the operator with installation. You can create, manage, and monitor other Kubernetes clusters with the Red Hat Advanced Cluster Management hub cluster. Release images are the version of OpenShift Container Platform that you use when you create a cluster. For clusters that are created using Red Hat Advanced Cluster Management, you can enable automatic upgrading of your release images. For more information about release images in Red Hat Advanced Cluster Management, see Release images . With hosted control planes for OpenShift Container Platform, you can create control planes as pods on a hosting cluster without the need for dedicated physical machines for each control plane. See the Hosted control planes overview in the OpenShift Container Platform documentation. Important If you are using multicluster engine operator 2.6 and earlier, the hosted control planes documentation is located in the Red Hat Advanced Cluster Management product documentation. See Red Hat Advanced Cluster Management Hosted control planes . Cluster lifecycle architecture Release notes for Cluster lifecycle with multicluster engine operator Installing and upgrading multicluster engine operator Console overview multicluster engine for Kubernetes operator Role-based access control Network configuration Managing credentials Cluster lifecycle introduction Release images Discovery service introduction APIs Troubleshooting 1.1. Console overview OpenShift Container Platform console plug-ins are available with the OpenShift Container Platform web console and can be integrated. To use this feature, the console plug-ins must remain enabled. The multicluster engine operator displays certain console features from Infrastructure and Credentials navigation items. If you install Red Hat Advanced Cluster Management, you see more console capability. 
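If you prefer the CLI, one way to confirm that the console plug-ins are enabled is to inspect the Console operator configuration. This is a hedged sketch, and the exact output format can differ between versions:

oc get consoles.operator.openshift.io cluster -o jsonpath='{.spec.plugins}'
# the list is expected to include mce, and also acm if Red Hat Advanced Cluster Management is installed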
Note: With the plug-ins enabled, you can access Red Hat Advanced Cluster Management within the OpenShift Container Platform console from the cluster switcher by selecting All Clusters from the drop-down menu. To disable the plug-in, be sure you are in the Administrator perspective in the OpenShift Container Platform console. Find Administration in the navigation and click Cluster Settings , then click Configuration tab. From the list of Configuration resources , click the Console resource with the operator.openshift.io API group, which contains cluster-wide configuration for the web console. Click on the Console plug-ins tab. The mce plug-in is listed. Note: If Red Hat Advanced Cluster Management is installed, it is also listed as acm . Modify plug-in status from the table. In a few moments, you are prompted to refresh the console. 1.2. multicluster engine operator role-based access control RBAC is validated at the console level and at the API level. Actions in the console can be enabled or disabled based on user access role permissions. View the following sections for more information on RBAC for specific lifecycles in the product: Overview of roles Cluster lifecycle RBAC Cluster pools RBAC Console and API RBAC table for cluster lifecycle Credentials role-based access control 1.2.1. Overview of roles Some product resources are cluster-wide and some are namespace-scoped. You must apply cluster role bindings and namespace role bindings to your users for consistent access controls. View the table list of the following role definitions that are supported: 1.2.1.1. Table of role definition Role Definition cluster-admin This is an OpenShift Container Platform default role. A user with cluster binding to the cluster-admin role is an OpenShift Container Platform super user, who has all access. open-cluster-management:cluster-manager-admin A user with cluster binding to the open-cluster-management:cluster-manager-admin role is a super user, who has all access. This role allows the user to create a ManagedCluster resource. open-cluster-management:admin:<managed_cluster_name> A user with cluster binding to the open-cluster-management:admin:<managed_cluster_name> role has administrator access to the ManagedCluster resource named, <managed_cluster_name> . When a user has a managed cluster, this role is automatically created. open-cluster-management:view:<managed_cluster_name> A user with cluster binding to the open-cluster-management:view:<managed_cluster_name> role has view access to the ManagedCluster resource named, <managed_cluster_name> . open-cluster-management:managedclusterset:admin:<managed_clusterset_name> A user with cluster binding to the open-cluster-management:managedclusterset:admin:<managed_clusterset_name> role has administrator access to ManagedCluster resource named <managed_clusterset_name> . The user also has administrator access to managedcluster.cluster.open-cluster-management.io , clusterclaim.hive.openshift.io , clusterdeployment.hive.openshift.io , and clusterpool.hive.openshift.io resources, which has the managed cluster set labels: cluster.open-cluster-management.io and clusterset=<managed_clusterset_name> . A role binding is automatically generated when you are using a cluster set. See Creating a ManagedClusterSet to learn how to manage the resource. 
open-cluster-management:managedclusterset:view:<managed_clusterset_name> A user with cluster binding to the open-cluster-management:managedclusterset:view:<managed_clusterset_name> role has view access to the ManagedCluster resource named, <managed_clusterset_name>`. The user also has view access to managedcluster.cluster.open-cluster-management.io , clusterclaim.hive.openshift.io , clusterdeployment.hive.openshift.io , and clusterpool.hive.openshift.io resources, which has the managed cluster set labels: cluster.open-cluster-management.io , clusterset=<managed_clusterset_name> . For more details on how to manage managed cluster set resources, see Creating a ManagedClusterSet . admin, edit, view Admin, edit, and view are OpenShift Container Platform default roles. A user with a namespace-scoped binding to these roles has access to open-cluster-management resources in a specific namespace, while cluster-wide binding to the same roles gives access to all of the open-cluster-management resources cluster-wide. Important : Any user can create projects from OpenShift Container Platform, which gives administrator role permissions for the namespace. If a user does not have role access to a cluster, the cluster name is not visible. The cluster name is displayed with the following symbol: - . RBAC is validated at the console level and at the API level. Actions in the console can be enabled or disabled based on user access role permissions. View the following sections for more information on RBAC for specific lifecycles in the product. 1.2.2. Cluster lifecycle RBAC View the following cluster lifecycle RBAC operations: Create and administer cluster role bindings for all managed clusters. For example, create a cluster role binding to the cluster role open-cluster-management:cluster-manager-admin by entering the following command: This role is a super user, which has access to all resources and actions. You can create cluster-scoped managedcluster resources, the namespace for the resources that manage the managed cluster, and the resources in the namespace with this role. You might need to add the username of the ID that requires the role association to avoid permission errors. Run the following command to administer a cluster role binding for a managed cluster named cluster-name : This role has read and write access to the cluster-scoped managedcluster resource. This is needed because the managedcluster is a cluster-scoped resource and not a namespace-scoped resource. Create a namespace role binding to the cluster role admin by entering the following command: This role has read and write access to the resources in the namespace of the managed cluster. Create a cluster role binding for the open-cluster-management:view:<cluster-name> cluster role to view a managed cluster named cluster-name Enter the following command: This role has read access to the cluster-scoped managedcluster resource. This is needed because the managedcluster is a cluster-scoped resource. Create a namespace role binding to the cluster role view by entering the following command: This role has read-only access to the resources in the namespace of the managed cluster. View a list of the managed clusters that you can access by entering the following command: This command is used by administrators and users without cluster administrator privileges. View a list of the managed cluster sets that you can access by entering the following command: This command is used by administrators and users without cluster administrator privileges. 
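The binding commands referenced in this section are not reproduced inline here. As a rough sketch only, they follow the usual oc pattern shown below; the binding names, the user placeholder, and the cluster-name placeholder are illustrative and are not taken from the product documentation:

# cluster-wide super user access
oc create clusterrolebinding my-cm-admin --clusterrole=open-cluster-management:cluster-manager-admin --user=<username>

# administer a single managed cluster named cluster-name
oc create clusterrolebinding my-cluster-admin --clusterrole=open-cluster-management:admin:cluster-name --user=<username>
oc create rolebinding my-ns-admin -n cluster-name --clusterrole=admin --user=<username>

# view-only access to the same managed cluster
oc create clusterrolebinding my-cluster-view --clusterrole=open-cluster-management:view:cluster-name --user=<username>
oc create rolebinding my-ns-view -n cluster-name --clusterrole=view --user=<username>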
1.2.2.1. Cluster pools RBAC View the following cluster pool RBAC operations: As a cluster administrator, use cluster pools to provision clusters by creating a managed cluster set and grant administrator permission to roles by adding the role to the group. View the following examples: Grant admin permission to the server-foundation-clusterset managed cluster set with the following command: Grant view permission to the server-foundation-clusterset managed cluster set with the following command: Create a namespace for the cluster pool, server-foundation-clusterpool . View the following examples to grant role permissions: Grant admin permission to server-foundation-clusterpool for the server-foundation-team-admin by running the following commands: As a team administrator, create a cluster pool named ocp46-aws-clusterpool with a cluster set label, cluster.open-cluster-management.io/clusterset=server-foundation-clusterset in the cluster pool namespace: The server-foundation-webhook checks if the cluster pool has the cluster set label, and if the user has permission to create cluster pools in the cluster set. The server-foundation-controller grants view permission to the server-foundation-clusterpool namespace for server-foundation-team-user . When a cluster pool is created, the cluster pool creates a clusterdeployment . Continue reading for more details: The server-foundation-controller grants admin permission to the clusterdeployment namespace for server-foundation-team-admin . The server-foundation-controller grants view permission to the clusterdeployment namespace for server-foundation-team-user . Note: As a team-admin and team-user , you have admin permission to the clusterpool , clusterdeployment , and clusterclaim . 1.2.2.2. Console and API RBAC table for cluster lifecycle View the following console and API RBAC tables for cluster lifecycle: Table 1.1. Console RBAC table for cluster lifecycle Resource Admin Edit View Clusters read, update, delete - read Cluster sets get, update, bind, join edit role not mentioned get Managed clusters read, update, delete no edit role mentioned get Provider connections create, read, update, and delete - read Table 1.2. API RBAC table for cluster lifecycle API Admin Edit View managedclusters.cluster.open-cluster-management.io You can use mcl (singular) or mcls (plural) in commands for this API. create, read, update, delete read, update read managedclusters.view.open-cluster-management.io You can use mcv (singular) or mcvs (plural) in commands for this API. read read read managedclusters.register.open-cluster-management.io/accept update update managedclusterset.cluster.open-cluster-management.io You can use mclset (singular) or mclsets (plural) in commands for this API. create, read, update, delete read, update read managedclustersets.view.open-cluster-management.io read read read managedclustersetbinding.cluster.open-cluster-management.io You can use mclsetbinding (singular) or mclsetbindings (plural) in commands for this API.
create, read, update, delete read, update read klusterletaddonconfigs.agent.open-cluster-management.io create, read, update, delete read, update read managedclusteractions.action.open-cluster-management.io create, read, update, delete read, update read managedclusterviews.view.open-cluster-management.io create, read, update, delete read, update read managedclusterinfos.internal.open-cluster-management.io create, read, update, delete read, update read manifestworks.work.open-cluster-management.io create, read, update, delete read, update read submarinerconfigs.submarineraddon.open-cluster-management.io create, read, update, delete read, update read placements.cluster.open-cluster-management.io create, read, update, delete read, update read 1.2.2.3. Credentials role-based access control The access to credentials is controlled by Kubernetes. Credentials are stored and secured as Kubernetes secrets. The following permissions apply to accessing secrets in Red Hat Advanced Cluster Management for Kubernetes: Users with access to create secrets in a namespace can create credentials. Users with access to read secrets in a namespace can also view credentials. Users with the Kubernetes cluster roles of admin and edit can create and edit secrets. Users with the Kubernetes cluster role of view cannot view secrets because reading the contents of secrets enables access to service account credentials. 1.3. Network configuration Configure your network settings to allow the connections. Important: The trusted CA bundle is available in the multicluster engine operator namespace, but that enhancement requires changes to your network. The trusted CA bundle ConfigMap uses the default name of trusted-ca-bundle . You can change this name by providing it to the operator in an environment variable named TRUSTED_CA_BUNDLE . See Configuring the cluster-wide proxy in the Networking section of Red Hat OpenShift Container Platform for more information. Note: Registration Agent and Work Agent on the managed cluster do not support proxy settings because they communicate with apiserver on the hub cluster by establishing an mTLS connection, which cannot pass through the proxy. For the multicluster engine operator cluster networking requirements, see the following table: Direction Protocol Connection Port (if specified) Outbound Kubernetes API server of the provisioned managed cluster 6443 Outbound from the OpenShift Container Platform managed cluster to the hub cluster TCP Communication between the Ironic Python Agent and the bare metal operator on the hub cluster 6180, 6183, 6385, and 5050 Outbound from the hub cluster to the Ironic Python Agent on the managed cluster TCP Communication between the bare metal node where the Ironic Python Agent is running and the Ironic conductor service 9999 Outbound and inbound The WorkManager service route on the managed cluster 443 Inbound The Kubernetes API server of the multicluster engine for Kubernetes operator cluster from the managed cluster 6443 Note: The managed cluster must be able to reach the hub cluster control plane node IP addresses. 1.4. Release notes for Cluster lifecycle with multicluster engine operator Learn about new features and enhancements, support, deprecations, removals, and Errata bug fixes. 
What's new for Cluster lifecycle with multicluster engine operator Errata updates for Cluster lifecycle with multicluster engine operator Known issues and limitations for Cluster lifecycle with multicluster engine operator Deprecations and removals for Cluster lifecycle with multicluster engine operator Important: OpenShift Container Platform release notes are not documented in this product documentation. For your OpenShift Container Platform cluster, see OpenShift Container Platform release notes . Deprecated: multicluster engine operator 2.3 and earlier versions are no longer supported. The documentation might remain available, but without any Errata or other updates. Best practice: Upgrade to the most recent version. The documentation references the earliest supported OpenShift Container Platform version, unless a specific component or function is introduced and tested only on a more recent version of OpenShift Container Platform. For full support information, see the multicluster engine operator Support matrix . For lifecycle information, see Red Hat OpenShift Container Platform Life Cycle policy . If you experience issues with one of the currently supported releases, or the product documentation, go to Red Hat Support where you can troubleshoot, view Knowledgebase articles, connect with the Support Team, or open a case. You must log in with your credentials. You can also learn more about the Customer Portal documentation at Red Hat Customer Portal FAQ . 1.4.1. What's new for Cluster lifecycle with multicluster engine operator Learn about new features for creating, importing, managing, and destroying Kubernetes clusters across various infrastructure cloud providers, private clouds, and on-premises data centers. For full support information, see the multicluster engine operator Support matrix . For lifecycle information, see Red Hat OpenShift Container Platform Life Cycle policy . Important: Cluster management now supports all providers that are certified through the Cloud Native Computing Foundation (CNCF) Kubernetes Conformance Program. Choose a vendor that is recognized by CNCF for your hybrid cloud multicluster management. See the following information about using CNCF providers: Learn how CNCF providers are certified at Certified Kubernetes Conformance . For Red Hat support information about CNCF third-party providers, see Red Hat support with third party components , or Contact Red Hat support . If you bring your own CNCF conformance certified cluster, you need to change the OpenShift Container Platform CLI oc command to the Kubernetes CLI command, kubectl . 1.4.1.1. New features and enhancements for components Learn more about new features for specific components. Note: Some features and components are identified and released as Technology Preview . Important: The hosted control planes documentation is now located in the OpenShift Container Platform documentation. See the Hosted control planes overview in the OpenShift Container Platform documentation. If you are using multicluster engine operator 2.6 and earlier, the hosted control planes documentation is located in the Red Hat Advanced Cluster Management product documentation. See Red Hat Advanced Cluster Management Hosted control planes . 1.4.1.2. Cluster management Learn about new features and enhancements for Cluster lifecycle with multicluster engine operator. You can now set a duration to choose when the kubeconfig bootstrap in the klusterlet manifest expires. To learn more, see Importing a cluster .
You can now import all cluster resources and continue using them after moving a managed cluster that was installed by the Assisted Installer from one hub cluster to another hub cluster. To learn more, see Importing cluster resources . You can now connect to OpenShift Cluster Manager with Service Account credentials. To learn more, see Creating a credential for Red Hat OpenShift Cluster Manager . You can now specify the CA bundle when importing a managed cluster. To learn more, see Customizing the server URL and CA bundle of the hub cluster API server when importing a managed cluster (Technology Preview) . You can now manually configure a hub cluster KubeAPIServer verification strategy. To learn more, see Configuring the hub cluster KubeAPIServer verification strategy 1.4.2. Errata updates for Cluster lifecycle with multicluster engine operator For multicluster engine operator, the Errata updates are automatically applied when released. If no release notes are listed, the product does not have an Errata release at this time. Important: For reference, Jira links and Jira numbers might be added to the content and used internally. Links that require access might not be available for the user. 1.4.2.1. Errata 2.7.3 Delivers updates to one or more product container images. 1.4.2.2. Errata 2.7.2 Delivers updates to one or more product container images. Fixes an error with the Clear all filters button. ( ACM-15277 ) Stops the Detach clusters action from deleting hosted clusters. ( ACM-15018 ) Prevents the managed clusters from being displayed in the Discovery tab in the console after updating valid OpenShift Cluster Manager credentials to invalid ones. ( ACM-15010 ) Keeps the cluster-proxy-addon from getting stuck in the Progressing state. ( ACM-14863 ) 1.4.2.3. Errata 2.7.1 Delivers updates to one or more product container images. 1.4.3. Known issues and limitations for Cluster lifecycle with multicluster engine operator Review the known issues and limitations for Cluster lifecycle with multicluster engine operator for this release, or known issues that continued from the release. Cluster management known issues and limitations are part of the Cluster lifecycle with multicluster engine operator documentation. Known issues for multicluster engine operator integrated with Red Hat Advanced Cluster Management are documented in the Release notes for Red Hat Advanced Cluster Management . Important: OpenShift Container Platform release notes are not documented in this product documentation. For your OpenShift Container Platform cluster, see OpenShift Container Platform release notes . Installation Cluster management Central infrastructure management 1.4.3.1. Installation Learn about known issues and limitations during multicluster engine operator installation. 1.4.3.1.1. Status stuck when installing on OpenShift Service on AWS with hosted control plane cluster Installation status might get stuck in the Installing state when you install multicluster engine operator on a OpenShift Service on AWS with hosted control planes cluster. The local-cluster might also remain in the Unknown state. 
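For example (a hedged sketch), you can confirm the symptom and locate the agent pod from the hub cluster before reading its log:

oc get managedcluster local-cluster              # the AVAILABLE column is expected to show Unknown when this issue occurs
oc get pods -n open-cluster-management-agent     # find the name of the klusterlet-agent pod
oc logs <klusterlet-agent-pod-name> -n open-cluster-management-agent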
When you check the klusterlet-agent pod log in the open-cluster-management-agent namespace on your hub cluster, you see an error that resembles the following: E0809 18:45:29.450874 1 reflector.go:147] k8s.io/[email protected]/tools/cache/reflector.go:229: Failed to watch *v1.CertificateSigningRequest: failed to list *v1.CertificateSigningRequest: Get "https://api.xxx.openshiftapps.com:443/apis/certificates.k8s.io/v1/certificatesigningrequests?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate signed by unknown authority
To resolve the problem, configure the hub cluster API server verification strategy. Complete the following steps: Create a KlusterletConfig resource with the name global if it does not exist. Set spec.hubKubeAPIServerConfig.serverVerificationStrategy to UseSystemTruststore. See the following example:
apiVersion: config.open-cluster-management.io/v1alpha1
kind: KlusterletConfig
metadata:
  name: global
spec:
  hubKubeAPIServerConfig:
    serverVerificationStrategy: UseSystemTruststore
Apply the resource by running the following command on the hub cluster. Replace <filename> with the name of your file: oc apply -f <filename> If the local-cluster state does not recover in one minute, export and decode the import.yaml file by running the following command on the hub cluster: oc get secret local-cluster-import -n local-cluster -o jsonpath={.data.import\.yaml} | base64 --decode > import.yaml Apply the file by running the following command on the hub cluster: oc apply -f import.yaml
1.4.3.1.2. installNamespace field can only have one value When enabling the managed-serviceaccount add-on, the installNamespace field in the ManagedClusterAddOn resource must have open-cluster-management-agent-addon as the value. Other values are ignored. The managed-serviceaccount add-on agent is always deployed in the open-cluster-management-agent-addon namespace on the managed cluster.
1.4.3.2. Cluster Learn about known issues and limitations for Cluster lifecycle with multicluster engine operator, such as issues with creating, discovering, importing, and removing clusters, and more cluster management issues for multicluster engine operator.
1.4.3.2.1. Limitation with nmstate You can configure the copy and paste feature to develop more quickly, but note the following limitation: To configure the copy-from-mac feature in the assisted-installer, you must add the mac-address to the nmstate definition interface and the mac-mapping interface. The mac-mapping interface is provided outside the nmstate definition interface. As a result, you must provide the same mac-address twice.
1.4.3.2.2. Deleting a managed cluster set does not automatically remove its label After you delete a ManagedClusterSet, the label that is added to each managed cluster that associates the cluster to the cluster set is not automatically removed. Manually remove the label from each of the managed clusters that were included in the deleted managed cluster set. The label resembles the following example: cluster.open-cluster-management.io/clusterset:<ManagedClusterSet Name>.
1.4.3.2.3. ClusterClaim error If you create a Hive ClusterClaim against a ClusterPool and manually set the ClusterClaim spec.lifetime field to an invalid golang time value, the product stops fulfilling and reconciling all ClusterClaims, not just the malformed claim.
You see the following error in the clusterclaim-controller pod logs, which is a specific example with the PoolName and invalid lifetime included: You can delete the invalid claim. If the malformed claim is deleted, claims begin successfully reconciling again without any further interaction. 1.4.3.2.4. The product channel out of sync with provisioned cluster The clusterimageset is in fast channel, but the provisioned cluster is in stable channel. Currently the product does not sync the channel to the provisioned OpenShift Container Platform cluster. Change to the right channel in the OpenShift Container Platform console. Click Administration > Cluster Settings > Details Channel . 1.4.3.2.5. Selecting a subnet is required when creating an on-premises cluster When you create an on-premises cluster using the console, you must select an available subnet for your cluster. It is not marked as a required field. 1.4.3.2.6. Cluster provision with Ansible automation fails in proxy environment An Automation template that is configured to automatically provision a managed cluster might fail when both of the following conditions are met: The hub cluster has cluster-wide proxy enabled. The Ansible Automation Platform can only be reached through the proxy. 1.4.3.2.7. Cannot delete managed cluster namespace manually You cannot delete the namespace of a managed cluster manually. The managed cluster namespace is automatically deleted after the managed cluster is detached. If you delete the managed cluster namespace manually before the managed cluster is detached, the managed cluster shows a continuous terminating status after you delete the managed cluster. To delete this terminating managed cluster, manually remove the finalizers from the managed cluster that you detached. 1.4.3.2.8. Automatic secret updates for provisioned clusters is not supported When you change your cloud provider access key on the cloud provider side, you also need to update the corresponding credential for this cloud provider on the console of multicluster engine operator. This is required when your credentials expire on the cloud provider where the managed cluster is hosted and you try to delete the managed cluster. 1.4.3.2.9. Process to destroy a cluster does not complete When you destroy a managed cluster, the status continues to display Destroying after one hour, and the cluster is not destroyed. To resolve this issue complete the following steps: Manually ensure that there are no orphaned resources on your cloud, and that all of the provider resources that are associated with the managed cluster are cleaned up. Open the ClusterDeployment information for the managed cluster that is being removed by entering the following command: Replace mycluster with the name of the managed cluster that you are destroying. Replace namespace with the namespace of the managed cluster. Remove the hive.openshift.io/deprovision finalizer to forcefully stop the process that is trying to clean up the cluster resources in the cloud. Save your changes and verify that ClusterDeployment is gone. Manually remove the namespace of the managed cluster by running the following command: Replace namespace with the namespace of the managed cluster. 1.4.3.2.10. Cannot upgrade OpenShift Container Platform managed clusters on OpenShift Container Platform Dedicated with the console You cannot use the Red Hat Advanced Cluster Management console to upgrade OpenShift Container Platform managed clusters that are in the OpenShift Container Platform Dedicated environment. 
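For the workaround in "Process to destroy a cluster does not complete" above, the exact commands are not reproduced in this extract. A minimal sketch of the kind of oc commands involved, assuming the standard OpenShift CLI and using mycluster and <namespace> purely as hypothetical placeholder values:
# Open the ClusterDeployment and delete the hive.openshift.io/deprovision finalizer from metadata.finalizers
oc edit clusterdeployment/mycluster -n <namespace>
# Verify that the ClusterDeployment is gone
oc get clusterdeployment mycluster -n <namespace>
# Manually remove the namespace of the managed cluster
oc delete ns <namespace>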
1.4.3.2.11. Work manager add-on search details The search details page for a certain resource on a certain managed cluster might fail. You must ensure that the work-manager add-on in the managed cluster is in Available status before you can search. 1.4.3.2.12. Non-OpenShift Container Platform managed clusters require ManagedServiceAccount or LoadBalancer for pod logs The ManagedServiceAccount and cluster proxy add-ons are enabled by default in Red Hat Advanced Cluster Management version 2.10 and newer. If the add-ons are disabled after upgrading, you must enable the ManagedServiceAccount and cluster proxy add-ons manually to use the pod log feature on non-OpenShift Container Platform managed clusters. See ManagedServiceAccount add-on to learn how to enable ManagedServiceAccount and see Using cluster proxy add-ons to learn how to enable a cluster proxy add-on. 1.4.3.2.13. OpenShift Container Platform 4.10.z does not support hosted control plane clusters with proxy configuration When you create a hosting service cluster with a cluster-wide proxy configuration on OpenShift Container Platform 4.10.z, the nodeip-configuration.service service does not start on the worker nodes. 1.4.3.2.14. Client cannot reach iPXE script iPXE is an open source network boot firmware. See iPXE for more details. When booting a node, the URL length limitation in some DHCP servers cuts off the ipxeScript URL in the InfraEnv custom resource definition, resulting in the following error message in the console: no bootable devices To work around the issue, complete the following steps: Apply the InfraEnv custom resource definition when using an assisted installation to expose the bootArtifacts , which might resemble the following file: Create a proxy server to expose the bootArtifacts with short URLs. Copy the bootArtifacts and add them them to the proxy by running the following commands: Add the ipxeScript artifact proxy URL to the bootp parameter in libvirt.xml . 1.4.3.2.15. Cannot delete ClusterDeployment after upgrading Red Hat Advanced Cluster Management If you are using the removed BareMetalAssets API in Red Hat Advanced Cluster Management 2.6, the ClusterDeployment cannot be deleted after upgrading to Red Hat Advanced Cluster Management 2.7 because the BareMetalAssets API is bound to the ClusterDeployment . To work around the issue, run the following command to remove the finalizers before upgrading to Red Hat Advanced Cluster Management 2.7: 1.4.3.2.16. Managed cluster stuck in Pending status after deployment The converged flow is the default process of provisioning. When you use the BareMetalHost resource for the Bare Metal Operator (BMO) to connect your host to a live ISO, the Ironic Python Agent does the following actions: It runs the steps in the Bare Metal installer-provisioned-infrastructure. It starts the Assisted Installer agent, and the agent handles the rest of the install and provisioning process. If the Assisted Installer agent starts slowly and you deploy a managed cluster, the managed cluster might become stuck in the Pending status and not have any agent resources. You can work around the issue by disabling the converged flow. 
Important: When you disable the converged flow, only the Assisted Installer agent runs in the live ISO, reducing the number of open ports and disabling any features you enabled with the Ironic Python Agent, including the following:
Pre-provisioning disk cleaning
iPXE boot firmware
BIOS configuration
To decide what port numbers you want to enable or disable without disabling the converged flow, see Network configuration. To disable the converged flow, complete the following steps: Create the following ConfigMap on the hub cluster:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-assisted-service-config
  namespace: multicluster-engine
data:
  ALLOW_CONVERGED_FLOW: "false" 1
1 When you set the parameter value to "false", you also disable any features enabled by the Ironic Python Agent.
Apply the ConfigMap by running the following command. Replace <filename> with the name of your file: oc apply -f <filename>
1.4.3.2.17. ManagedClusterSet API specification limitation The selectorType: LabelSelector setting is not supported when using the Clustersets API. The selectorType: ExclusiveClusterSetLabel setting is supported.
1.4.3.2.18. The Cluster curator does not support OpenShift Container Platform Dedicated clusters When you upgrade an OpenShift Container Platform Dedicated cluster by using the ClusterCurator resource, the upgrade fails because the Cluster curator does not support OpenShift Container Platform Dedicated clusters.
1.4.3.2.19. Custom ingress domain is not applied correctly You can specify a custom ingress domain by using the ClusterDeployment resource while installing a managed cluster, but the change is only applied after the installation by using the SyncSet resource. As a result, the spec field in the clusterdeployment.yaml file displays the custom ingress domain you specified, but the status still displays the default domain.
1.4.3.2.20. ManagedClusterAddon status becomes stuck If you define configurations in the ManagedClusterAddon to override some configurations in the ClusterManagementAddon, the ManagedClusterAddon might become stuck at the following status: progressing... mca and work configs mismatch
When you check the ManagedClusterAddon status, a part of the configurations has an empty spec hash, even if the configurations exist. See the following example:
status:
  conditions:
  - lastTransitionTime: "2024-09-09T16:08:42Z"
    message: progressing... mca and work configs mismatch
    reason: Progressing
    status: "True"
    type: Progressing
  ...
  configReferences:
  - desiredConfig:
      name: deploy-config
      namespace: open-cluster-management-hub
      specHash: b81380f1f1a1920388d90859a5d51f5521cecd77752755ba05ece495f551ebd0
    group: addon.open-cluster-management.io
    lastObservedGeneration: 1
    name: deploy-config
    namespace: open-cluster-management-hub
    resource: addondeploymentconfigs
  - desiredConfig:
      name: cluster-proxy
      specHash: ""
    group: proxy.open-cluster-management.io
    lastObservedGeneration: 1
    name: cluster-proxy
    resource: managedproxyconfigurations
To resolve the issue, delete the ManagedClusterAddon by running the following command to reinstall and recover the ManagedClusterAddon. Replace <cluster-name> with the ManagedClusterAddon namespace. Replace <addon-name> with the ManagedClusterAddon name: oc -n <cluster-name> delete managedclusteraddon <addon-name>
1.4.3.3. Central infrastructure management
1.4.3.3.1. Cluster provisioning with infrastructure operator for Red Hat OpenShift fails When creating OpenShift Container Platform clusters by using the infrastructure operator for Red Hat OpenShift, the file name of the ISO image might be too long.
The long image name causes the image provisioning and the cluster provisioning to fail. To determine if this is the problem, complete the following steps: View the bare metal host information for the cluster that you are provisioning by running the following command: Run the describe command to view the error information: An error similar to the following example indicates that the length of the filename is the problem: If this problem occurs, it is typically on the following versions of OpenShift Container Platform, because the infrastructure operator for Red Hat OpenShift was not using image service: 4.8.17 and earlier 4.9.6 and earlier To avoid this error, upgrade your OpenShift Container Platform to version 4.8.18 or later, or 4.9.7 or later. 1.4.3.3.2. Cannot use host inventory to boot with the discovery image and add hosts automatically You cannot use a host inventory, or InfraEnv custom resource, to both boot with the discovery image and add hosts automatically. If you used your InfraEnv resource for the BareMetalHost resource, and you want to boot the image yourself, you can work around the issue by creating a new InfraEnv resource. 1.4.3.3.3. A single-node OpenShift cluster installation requires a matching OpenShift Container Platform with infrastructure operator for Red Hat OpenShift If you want to install a single-node OpenShift cluster with an Red Hat OpenShift Container Platform version before 4.16, your InfraEnv custom resource and your booted host must use the same OpenShift Container Platform version that you are using to install the single-node OpenShift cluster. The installation fails if the versions do not match. To work around the issue, edit your InfraEnv resource before you boot a host with the Discovery ISO, and include the following content: apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv spec: osImageVersion: 4.15 The osImageVersion field must match the Red Hat OpenShift Container Platform cluster version that you want to install. 1.4.3.3.4. tolerations and nodeSelector settings do not affect the managed-serviceaccount agent The tolerations and nodeSelector settings configured on the MultiClusterEngine and MultiClusterHub resources do not affect the managed-serviceaccount agent deployed on the local cluster. The managed-serviceaccount add-on is not always required on the local cluster. If the managed-serviceaccount add-on is required, you can work around the issue by completing the following steps: Create the addonDeploymentConfig custom resource. Set the tolerations and nodeSelector values for the local cluster and managed-serviceaccount agent. Update the managed-serviceaccount ManagedClusterAddon in the local cluster namespace to use the addonDeploymentConfig custom resource you created. See Configuring nodeSelectors and tolerations for klusterlet add-ons to learn more about how to use the addonDeploymentConfig custom resource to configure tolerations and nodeSelector for add-ons. 1.4.3.3.5. Nodes shut down after removing BareMetalHost resource If you remove the BareMetalHost resource from a hub cluster, the nodes shut down. You can manually power on the nodes again. 1.4.4. Deprecations and removals for Cluster lifecycle with multicluster engine operator Learn when parts of the product are deprecated or removed from multicluster engine operator. Consider the alternative actions in the Recommended action and details, which display in the tables for the current release and for two prior releases. 
Tables are removed if no entries are added for that section this release. Deprecated: multicluster engine operator 2.3 and earlier versions are no longer supported. The documentation might remain available, but without any Errata or other updates. Best practice: Upgrade to the most recent version.
1.4.4.1. API deprecations and removals multicluster engine operator follows the Kubernetes deprecation guidelines for APIs. See the Kubernetes Deprecation Policy for more details about that policy. multicluster engine operator APIs are only deprecated or removed outside of the following timelines: All V1 APIs are generally available and supported for 12 months or three releases, whichever is greater. V1 APIs are not removed, but can be deprecated outside of that time limit. All beta APIs are generally available for nine months or three releases, whichever is greater. Beta APIs are not removed outside of that time limit. All alpha APIs are not required to be supported, but might be listed as deprecated or removed if it benefits users.
1.4.4.1.1. API deprecations
Product or category | Affected item | Version | Recommended action | More details and links
ManagedServiceAccount | The v1alpha1 API is upgraded to v1beta1 because v1alpha1 is deprecated. | 2.4 | Use v1beta1. | None
1.4.4.2. Deprecations
Product or category | Affected item | Version | Recommended action | More details and links
KlusterletConfig | The hubKubeAPIServerProxyConfig field is deprecated in the KlusterletConfig spec. | 2.7 | Use the hubKubeAPIServerConfig.proxyURL and hubKubeAPIServerConfig.trustedCABundles fields. | None
KlusterletConfig | The hubKubeAPIServerURL field is deprecated in the KlusterletConfig spec. | 2.7 | Use the hubKubeAPIServerConfig.url field. | None
KlusterletConfig | The hubKubeAPIServerCABundle field is deprecated in the KlusterletConfig spec. | 2.7 | Use the hubKubeAPIServerConfig.serverVerificationStrategy and hubKubeAPIServerConfig.trustedCABundles fields. | None
1.4.4.3. Removals A removed item is typically a function that was deprecated in earlier releases and is no longer available in the product. You must use alternatives for the removed function. Consider the alternative actions in the Recommended action and details that are provided in the following table:
Product or category | Affected item | Version | Recommended action | More details and links
Cluster lifecycle | Create cluster on Red Hat Virtualization | 2.6 | None | None
Cluster lifecycle | Klusterlet Operator Lifecycle Manager Operator | 2.6 | None | None
1.5. Installing and upgrading multicluster engine operator The multicluster engine operator is a software operator that enhances cluster fleet management. The multicluster engine operator supports Red Hat OpenShift Container Platform and Kubernetes cluster lifecycle management across clouds and data centers. The documentation references the earliest supported OpenShift Container Platform version, unless a specific component or function is introduced and tested only on a more recent version of OpenShift Container Platform. For full support information, see the multicluster engine operator Support matrix. For life cycle information, see Red Hat OpenShift Container Platform Life Cycle policy. Important: If you are using Red Hat Advanced Cluster Management, then multicluster engine for Kubernetes operator is already installed on the cluster. Deprecated: multicluster engine operator 2.3 and earlier versions are no longer supported. The documentation might remain available, but without any Errata or other updates. Best practice: Upgrade to the most recent version.
See the following documentation: Installing while connected online Configuring infrastructure nodes for multicluster engine operator Installing on disconnected networks Uninstalling Network configuration Upgrading disconnected clusters using policies MultiClusterEngine advanced configuration multicluster engine operator with Red Hat Advanced Cluster Management integration 1.5.1. Installing while connected online The multicluster engine operator is installed with Operator Lifecycle Manager, which manages the installation, upgrade, and removal of the components that encompass the multicluster engine operator. Required access: Cluster administrator Important: You cannot install multicluster engine operator on a cluster that has a ManagedCluster resource configured in an external cluster. You must remove the ManagedCluster resource from the external cluster before you can install multicluster engine operator. For OpenShift Container Platform Dedicated environment, you must have cluster-admin permissions. By default dedicated-admin role does not have the required permissions to create namespaces in the OpenShift Container Platform Dedicated environment. By default, the multicluster engine operator components are installed on worker nodes of your OpenShift Container Platform cluster without any additional configuration. You can install multicluster engine operator onto worker nodes by using the OpenShift Container Platform OperatorHub web console interface, or by using the OpenShift Container Platform CLI. If you configured your OpenShift Container Platform cluster with infrastructure nodes, you can install multicluster engine operator onto those infrastructure nodes by using the OpenShift Container Platform CLI with additional resource parameters. See the Installing multicluster engine on infrastructure nodes section for those details. If you plan to import Kubernetes clusters that were not created by OpenShift Container Platform or multicluster engine operator, you need to configure an image pull secret. For information about how to configure an image pull secret and other advanced configurations, see options in the Advanced configuration section of this documentation. Prerequisites Confirm your OpenShift Container Platform installation Installing from the OperatorHub web console interface Installing from the OpenShift Container Platform CLI 1.5.1.1. Prerequisites Before you install multicluster engine operator, see the following prerequisites: Your OpenShift Container Platform cluster must have access to the multicluster engine operator in the OperatorHub catalog from the console. You need access to the catalog.redhat.com . Your cluster does not have a ManagedCluster resource configured in an external cluster. A supported version of OpenShift Container Platform must be deployed in your environment, and you must be logged into with the OpenShift Container Platform CLI. See the following install documentation: OpenShift Container Platform Installing Your OpenShift Container Platform command line interface (CLI) must be configured to run oc commands. See Getting started with the CLI for information about installing and configuring the OpenShift Container Platform CLI. Your OpenShift Container Platform permissions must allow you to create a namespace. To install in a OpenShift Container Platform Dedicated environment, see the following: You must have the OpenShift Container Platform Dedicated environment configured and running. 
You must have cluster-admin authority to the OpenShift Container Platform Dedicated environment where you are installing the engine. If you plan to create managed clusters by using the Assisted Installer that is provided with Red Hat OpenShift Container Platform, see Preparing to install with the Assisted Installer topic in the OpenShift Container Platform documentation for the requirements. 1.5.1.2. Confirm your OpenShift Container Platform installation You must have a supported OpenShift Container Platform version, including the registry and storage services, installed and working. For more information about installing OpenShift Container Platform, see the OpenShift Container Platform documentation. Verify that multicluster engine operator is not already installed on your OpenShift Container Platform cluster. The multicluster engine operator allows only one single installation on each OpenShift Container Platform cluster. Continue with the following steps if there is no installation. To ensure that the OpenShift Container Platform cluster is set up correctly, access the OpenShift Container Platform web console with the following command: See the following example output: Open the URL in your browser and check the result. If the console URL displays console-openshift-console.router.default.svc.cluster.local , set the value for openshift_master_default_subdomain when you install OpenShift Container Platform. See the following example of a URL: https://console-openshift-console.apps.new-coral.purple-chesterfield.com . You can proceed to install multicluster engine operator. 1.5.1.3. Installing from the OperatorHub web console interface Best practice: From the Administrator view in your OpenShift Container Platform navigation, install the OperatorHub web console interface that is provided with OpenShift Container Platform. Select Operators > OperatorHub to access the list of available operators, and select multicluster engine for Kubernetes operator. Click Install . On the Operator Installation page, select the options for your installation: Namespace: The multicluster engine operator engine must be installed in its own namespace, or project. By default, the OperatorHub console installation process creates a namespace titled multicluster-engine . Best practice: Continue to use the multicluster-engine namespace if it is available. If there is already a namespace named multicluster-engine , select a different namespace. Channel: The channel that you select corresponds to the release that you are installing. When you select the channel, it installs the identified release, and establishes that the future errata updates within that release are obtained. Approval strategy: The approval strategy identifies the human interaction that is required for applying updates to the channel or release to which you subscribed. Select Automatic , which is selected by default, to ensure any updates within that release are automatically applied. Select Manual to receive a notification when an update is available. If you have concerns about when the updates are applied, this might be best practice for you. Note: To upgrade to the minor release, you must return to the OperatorHub page and select a new channel for the more current release. Select Install to apply your changes and create the operator. See the following process to create the MultiClusterEngine custom resource. In the OpenShift Container Platform console navigation, select Installed Operators > multicluster engine for Kubernetes . 
Select the MultiCluster Engine tab. Select Create MultiClusterEngine. Update the default values in the YAML file. See options in the MultiClusterEngine advanced configuration section of the documentation. The following example shows the default template that you can copy into the editor:
apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec: {}
Select Create to initialize the custom resource. It can take up to 10 minutes for the multicluster engine operator engine to build and start. After the MultiClusterEngine resource is created, the status for the resource is Available on the MultiCluster Engine tab.
1.5.1.4. Installing from the OpenShift Container Platform CLI Create a multicluster engine operator engine namespace where the operator requirements are contained. Run the following command, where namespace is the name for your multicluster engine for Kubernetes operator namespace. The value for namespace might be referred to as Project in the OpenShift Container Platform environment: oc create namespace <namespace>
Switch your project namespace to the one that you created. Replace namespace with the name of the multicluster engine for Kubernetes operator namespace that you created in step 1: oc project <namespace>
Create a YAML file to configure an OperatorGroup resource. Each namespace can have only one operator group. Replace default with the name of your operator group. Replace namespace with the name of your project namespace. See the following example:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: <default>
  namespace: <namespace>
spec:
  targetNamespaces:
  - <namespace>
Run the following command to create the OperatorGroup resource. Replace operator-group with the name of the operator group YAML file that you created: oc apply -f <operator-group>.yaml
Create a YAML file to configure an OpenShift Container Platform Subscription. Your file appears similar to the following example:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: multicluster-engine
spec:
  sourceNamespace: openshift-marketplace
  source: redhat-operators
  channel: stable-2.7
  installPlanApproval: Automatic
  name: multicluster-engine
Note: To configure infrastructure nodes, see Configuring infrastructure nodes for multicluster engine operator.
Run the following command to create the OpenShift Container Platform Subscription. Replace subscription with the name of the subscription file that you created: oc apply -f <subscription>.yaml
Create a YAML file to configure the MultiClusterEngine custom resource. Your default template should look similar to the following example:
apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec: {}
Note: For installing the multicluster engine operator on infrastructure nodes, see the MultiClusterEngine custom resource additional configuration section.
Run the following command to create the MultiClusterEngine custom resource. Replace custom-resource with the name of your custom resource file: oc apply -f <custom-resource>.yaml
If this step fails with an error that reports that the MultiClusterEngine kind is not yet recognized, the resources are still being created and applied. Run the command again in a few minutes when the resources are created.
Run the following command to get the custom resource; a sketch of the command is shown after this procedure. It can take up to 10 minutes for the MultiClusterEngine custom resource status to display as Available in the status.phase field. If you are reinstalling the multicluster engine operator and the pods do not start, see Troubleshooting reinstallation failure for steps to work around this problem.
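As noted in the final step of the procedure above, the command for retrieving the custom resource is not shown in this extract. A minimal sketch, assuming the resource was created with the default name multiclusterengine and that the mce short name is available for the CRD:
# Check the MultiClusterEngine status; repeat until status.phase reports Available
oc get multiclusterengine multiclusterengine -o=jsonpath='{.status.phase}'
# Or list all instances and inspect the full status
oc get mce -o yaml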
Notes: A ServiceAccount with a ClusterRoleBinding automatically gives cluster administrator privileges to multicluster engine operator and to any user credentials with access to the namespace where you install multicluster engine operator. You can now configure your OpenShift Container Platform cluster to contain infrastructure nodes to run approved management components. Running components on infrastructure nodes avoids allocating OpenShift Container Platform subscription quota for the nodes that are running those management components. See Configuring infrastructure nodes for multicluster engine operator for that procedure. 1.5.2. Configuring infrastructure nodes for multicluster engine operator Configure your OpenShift Container Platform cluster to contain infrastructure nodes to run approved multicluster engine operator management components. Running components on infrastructure nodes avoids allocating OpenShift Container Platform subscription quota for the nodes that are running multicluster engine operator management components. After adding infrastructure nodes to your OpenShift Container Platform cluster, follow the Installing from the OpenShift Container Platform CLI instructions and add the following configurations to the Operator Lifecycle Manager Subscription and MultiClusterEngine custom resource. 1.5.2.1. Configuring infrastructure nodes to the OpenShift Container Platform cluster Follow the procedures that are described in Creating infrastructure machine sets in the OpenShift Container Platform documentation. Infrastructure nodes are configured with a Kubernetes taints and labels to keep non-management workloads from running on them. For compatibility with the infrastructure node enablement, which is provided by multicluster engine operator, ensure your infrastructure nodes have the following taints and labels applied: metadata: labels: node-role.kubernetes.io/infra: "" spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/infra 1.5.2.2. Operator Lifecycle Manager subscription configuration Configure your Operator Lifecycle Manager subscription. Add the following additional configuration before applying the Operator Lifecycle Manager Subscription: spec: config: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra effect: NoSchedule operator: Exists Update any add-ons to include the following node selectors and tolerations. See Configuring nodeSelectors and tolerations for klusterlet add-ons . nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra effect: NoSchedule operator: Exists If you used Red Hat OpenShift Data Foundation as a storage provisioner, make sure Container Storage Interface pods can run on infrastructure nodes. Learn more at Managing container storage interface (CSI) component placements in the Red Hat OpenShift Data Foundation documentation. 1.5.2.3. MultiClusterEngine custom resource additional configuration Add the following additional configuration before applying the MultiClusterEngine custom resource: spec: nodeSelector: node-role.kubernetes.io/infra: "" 1.5.3. Install on disconnected networks You might need to install the multicluster engine operator on Red Hat OpenShift Container Platform clusters that are not connected to the Internet. The procedure to install on a disconnected engine requires some of the same steps as the connected installation. 
Important: You must install multicluster engine operator on a cluster that does not have Red Hat Advanced Cluster Management for Kubernetes earlier than 2.5 installed. The multicluster engine operator cannot co-exist with Red Hat Advanced Cluster Management for Kubernetes on versions earlier than 2.5 because they provide some of the same management components. It is recommended that you install multicluster engine operator on a cluster that has never previously installed Red Hat Advanced Cluster Management. If you are using Red Hat Advanced Cluster Management for Kubernetes at version 2.5.0 or later then multicluster engine operator is already installed on the cluster with it. You must download copies of the packages to access them during the installation, rather than accessing them directly from the network during the installation. Prerequisites Confirm your OpenShift Container Platform installation Installing in a disconnected environment 1.5.3.1. Prerequisites You must meet the following requirements before you install The multicluster engine operator: A supported OpenShift Container Platform version must be deployed in your environment, and you must be logged in with the command line interface (CLI). You need access to catalog.redhat.com . Note: For managing bare metal clusters, you need a supported OpenShift Container Platform version. See the OpenShift Container Platform Installing . Your Red Hat OpenShift Container Platform permissions must allow you to create a namespace. You must have a workstation with Internet connection to download the dependencies for the operator. 1.5.3.2. Confirm your OpenShift Container Platform installation You must have a supported OpenShift Container Platform version, including the registry and storage services, installed and working in your cluster. For information about OpenShift Container Platform, see OpenShift Container Platform documentation . When and if you are connected, accessing the OpenShift Container Platform web console with the following command to verify: See the following example output: The console URL in this example is: https:// console-openshift-console.apps.new-coral.purple-chesterfield.com . Open the URL in your browser and check the result. If the console URL displays console-openshift-console.router.default.svc.cluster.local , set the value for openshift_master_default_subdomain when you install OpenShift Container Platform. 1.5.3.3. Installing in a disconnected environment Important: You need to download the required images to a mirroring registry to install the operators in a disconnected environment. Without the download, you might receive ImagePullBackOff errors during your deployment. Follow these steps to install the multicluster engine operator in a disconnected environment: Create a mirror registry. If you do not already have a mirror registry, create one by completing the procedure in the Disconnected installation mirroring topic of the Red Hat OpenShift Container Platform documentation. If you already have a mirror registry, you can configure and use your existing one. Note: For bare metal only, you need to provide the certificate information for the disconnected registry in your install-config.yaml file. To access the image in a protected disconnected registry, you must provide the certificate information so the multicluster engine operator can access the registry. Copy the certificate information from the registry. Open the install-config.yaml file in an editor. Find the entry for additionalTrustBundle: | . 
Add the certificate information after the additionalTrustBundle line. The resulting content should look similar to the following example: additionalTrustBundle: | -----BEGIN CERTIFICATE----- certificate_content -----END CERTIFICATE----- sshKey: >- Important: Additional mirrors for disconnected image registries are needed if the following Governance policies are required: Container Security Operator policy: Locate the images in the registry.redhat.io/quay source. Compliance Operator policy: Locate the images in the registry.redhat.io/compliance source. Gatekeeper Operator policy: Locate the images in the registry.redhat.io/gatekeeper source. See the following example of mirrors lists for all three operators: - mirrors: - <your_registry>/rhacm2 source: registry.redhat.io/rhacm2 - mirrors: - <your_registry>/quay source: registry.redhat.io/quay - mirrors: - <your_registry>/compliance source: registry.redhat.io/compliance Save the install-config.yaml file. Create a YAML file that contains the ImageContentSourcePolicy with the name mce-policy.yaml . Note: If you modify this on a running cluster, it causes a rolling restart of all nodes. apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mce-repo spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:5000/multicluster-engine source: registry.redhat.io/multicluster-engine Apply the ImageContentSourcePolicy file by entering the following command: Enable the disconnected Operator Lifecycle Manager Red Hat Operators and Community Operators. the multicluster engine operator is included in the Operator Lifecycle Manager Red Hat Operator catalog. Configure the disconnected Operator Lifecycle Manager for the Red Hat Operator catalog. Follow the steps in the Using Operator Lifecycle Manager on restricted networks topic of theRed Hat OpenShift Container Platform documentation. Continue to install the multicluster engine operator for Kubernetes from the Operator Lifecycle Manager catalog. See Installing while connected online for the required steps. 1.5.4. Upgrading disconnected clusters by using policies If you have the Red Hat Advanced Cluster Management for Kubernetes hub cluster, which uses the MultiClusterHub operator to manage, upgrade, and install hub cluster components, you can use OpenShift Update Service with Red Hat Advanced Cluster Management policies to upgrade multiple clusters in a disconnected environment. OpenShift Update Service is a separate operator and operand that monitors the available versions of your managed clusters and makes them available for upgrading in a disconnected environment. OpenShift Update Service can perform the following actions: Monitor when upgrades are available for your disconnected clusters. Identify which updates are mirrored to your local site for upgrading by using the graph data file. Notify you that an upgrade is available for your cluster by using the console. 
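The step above that applies the ImageContentSourcePolicy file does not show the command in this extract. A minimal sketch, assuming the file was saved as mce-policy.yaml as described earlier:
# Apply the mirror configuration; note that this causes a rolling restart of all nodes
oc apply -f mce-policy.yaml
# Confirm that the policy exists
oc get imagecontentsourcepolicy mce-repo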
Prerequisites Prepare your disconnected mirror registry Deploy the operator for OpenShift Update Service Build the graph data init container Configuring the certificate for the mirrored registry Deploy the OpenShift Update Service instance Optional: Deploying a policy to override the default registry Deploying a policy to deploy a disconnected catalog source Deploying a policy to change the managed cluster parameter Viewing available upgrades Selecting a channel Upgrading the cluster Additional resources See Configuring additional trust stores for image registry access in the OpenShift Container Platform documentation to learn more about the external registry CA certificate. 1.5.4.1. Prerequisites You must have the following prerequisites before you can use OpenShift Update Service to upgrade your disconnected clusters: You need to install Red Hat Advanced Cluster Management. See the Red Hat Advanced Cluster Management Installing and upgrading documentation. You need a hub cluster that is running on a supported Red Hat OpenShift Container Platform version with restricted OLM configured. See Using Operator Lifecycle Manager on restricted networks for details about how to configure restricted OLM. Take note of the catalog source image when you configure restricted OLM. You need an OpenShift Container Platform cluster that the hub cluster manages. You need access credentials to a local repository where you can mirror the cluster images. See Disconnected installation mirroring for more information. Note: The image for the current version of the cluster that you upgrade must remain available as one of the mirrored images. If an upgrade fails, the cluster reverts back to the version of the cluster when you tried to upgrade. 1.5.4.2. Preparing your disconnected mirror registry You must mirror both the image that you want to upgrade to and the current image that you are upgrading from to your local mirror registry. Complete the following steps to mirror the images: Create a script file with content that resembles the following example. Replace <pull-secret> with the path to your OpenShift Container Platform pull secret: UPSTREAM_REGISTRY=quay.io PRODUCT_REPO=openshift-release-dev RELEASE_NAME=ocp-release OCP_RELEASE=4.15.2-x86_64 LOCAL_REGISTRY=USD(hostname):5000 LOCAL_SECRET_JSON=<pull-secret> oc adm -a USD{LOCAL_SECRET_JSON} release mirror \ --from=USD{UPSTREAM_REGISTRY}/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE} \ --to=USD{LOCAL_REGISTRY}/ocp4 \ --to-release-image=USD{LOCAL_REGISTRY}/ocp4/release:USD{OCP_RELEASE} Run the script to mirror the images, configure settings, and separate the release images from the release content. 1.5.4.3. Deploying the operator for OpenShift Update Service To deploy the operator for OpenShift Update Service in your OpenShift Container Platform environment, complete the following steps: On your hub cluster, access the OpenShift Container Platform operator hub. Deploy the operator by selecting OpenShift Update Service Operator and update the default values if needed. The deployment of the operator creates a new project named openshift-update-service . Wait for the installation of the operator to finish. You can check the status of the installation by running the oc get pods command. Verify that the operator is in the running state. 1.5.4.4. Building the graph data init container OpenShift Update Service uses graph data information to find the available upgrades. 
In a connected environment, OpenShift Update Service pulls the graph data information for available upgrades directly from the update-service graph data GitHub repository . In a disconnected environment, you must make the graph data available in a local repository by using an init container . Complete the following steps to create a graph data init container : Clone the graph data Git repository by running the following command: git clone https://github.com/openshift/cincinnati-graph-data Create a file that has the information for your graph data init . You can find a sample Dockerfile in the cincinnati-operator GitHub repository. The FROM value is the external registry where OpenShift Update Service finds the images. The RUN commands create the directory and package the upgrade files. The CMD command copies the package file to the local repository and extracts the files for an upgrade. Run the following command to build the graph data init container : podman build -f <docker-path> -t <graph-path>:latest Replace <docker-path> with the path to the file that you created in the step. Replace <graph-path> with the path to your local graph data init container. Run the following command to push the graph data init container : podman push <graph-path>:latest --authfile=<pull-secret>.json Replace <graph-path> with the path to your local graph data init container. Replace <pull-secret> with the path to your pull secret file. Optional: If you do not have podman installed, replace podman with docker in step three and four. 1.5.4.5. Configuring the certificate for the mirrored registry If you are using a secure external container registry to store your mirrored OpenShift Container Platform release images, OpenShift Update Service requires access to this registry to build an upgrade graph. Complete the following steps to configure your CA certificate to work with the OpenShift Update Service pod: Find the OpenShift Container Platform external registry API, which is located in image.config.openshift.io . This is where the external registry CA certificate is stored. See Configuring additional trust stores for image registry access in the additional resources section to learn more. Create a ConfigMap in the openshift-config namespace and add your CA certificate in the updateservice-registry section. See the following example: apiVersion: v1 kind: ConfigMap metadata: name: trusted-ca data: updateservice-registry: | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- Edit the cluster resource in the image.config.openshift.io API to set the additionalTrustedCA field to the name of the ConfigMap that you created. Run the following command and replace <trusted_ca> with the path to your new ConfigMap: The OpenShift Update Service Operator watches the image.config.openshift.io API and the ConfigMap you created in the openshift-config namespace for changes, then restarts the deployment if the CA cert has changed. 1.5.4.6. Deploying the OpenShift Update Service instance When you finish deploying the OpenShift Update Service instance on your hub cluster, the instance is located where the images for the cluster upgrades are mirrored and made available to the disconnected managed cluster. Complete the following steps to deploy the instance: If you do not want to use the default namespace of the operator, navigate to Administration > Namespaces in the console to change it. In the Installed Operators section of the OpenShift Container Platform console, select OpenShift Update Service Operator . 
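To inspect the registry configuration that the previous step refers to, you can query the cluster image configuration directly. A brief sketch, assuming the standard oc CLI on the hub cluster:
# Display the cluster-wide image configuration, including any additionalTrustedCA reference
oc get image.config.openshift.io/cluster -o yaml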
Select Create Instance in the menu. Paste the contents from your OpenShift Update Service instance. Your YAML instance might resemble the following manifest: apiVersion: update-service.openshift.io/v1beta2 kind: update-service metadata: name: openshift-cincinnati-instance namespace: openshift-update-service spec: registry: <registry-host-name>:<port> 1 replicas: 1 repository: USD{LOCAL_REGISTRY}/ocp4/release graphDataImage: '<host-name>:<port>/cincinnati-graph-data-container' 2 1 Replace with the path to your local disconnected registry for your images. 2 Replace with the path to your graph data init container. This is the same value that you used when you ran the podman push command to push your graph data init container. Select Create to create the instance. From the hub cluster CLI, enter the oc get pods command to view the status of the instance creation. It might take a few minutes. The process is complete when the result of the command shows that the instance and the operator are running. 1.5.4.7. Optional: Deploying a policy to override the default registry The following steps only apply if you have mirrored your releases into your mirrored registry. Deprecated: PlacementRule OpenShift Container Platform has a default image registry value that specifies where it finds the upgrade packages. In a disconnected environment, you can create a policy to replace that value with the path to your local image registry where you mirrored your release images. Complete the following steps to create the policy: Log in to the OpenShift Container Platform environment of your hub cluster. From the console, select Governance > Create policy . Set the YAML switch to On to view the YAML version of the policy. Delete all of the content in the YAML code. Paste the following YAML content into the window to create a custom policy: apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-mirror namespace: default spec: disabled: false remediationAction: enforce policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-image-content-source-policy spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: <your-local-mirror-name> 1 spec: repositoryDigestMirrors: - mirrors: - <your-registry> 2 source: registry.redhat.io --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-mirror namespace: default placementRef: name: placement-policy-mirror kind: PlacementRule apiGroup: apps.open-cluster-management.io subjects: - name: policy-mirror kind: Policy apiGroup: policy.open-cluster-management.io --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-mirror namespace: default spec: clusterSelector: matchExpressions: [] 3 1 Replace with your local mirror name. 2 Replace with the path to your local mirror repository. You can find the path to your local mirror by running the oc adm release mirror command. 3 Selects all clusters if not specified. Select Enforce if supported . Select Create to create the policy. 1.5.4.8. Deploying a policy to deploy a disconnected catalog source You can push the Catalogsource policy to the managed cluster to change the default location from a connected location to your disconnected local registry. 
Complete the following steps to change the default location: In the console menu, select Governance > Create policy . Set the YAML switch to On to view the YAML version of the policy. Delete all of the content in the YAML code. Paste the following YAML content into the window to create a custom policy: apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-catalog namespace: default spec: disabled: false remediationAction: enforce policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-catalog spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true - complianceType: musthave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace spec: sourceType: grpc image: '<registry_host_name>:<port>/olm/redhat-operators:v1' 1 displayName: My Operator Catalog publisher: grpc --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-catalog namespace: default placementRef: name: placement-policy-catalog kind: PlacementRule apiGroup: apps.open-cluster-management.io subjects: - name: policy-catalog kind: Policy apiGroup: policy.open-cluster-management.io --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-catalog namespace: default spec: clusterSelector: matchExpressions: [] 2 1 Replace with the path to your local restricted catalog source image. 2 Selects all clusters if not specified. Select Enforce if supported . Select Create to create the policy. 1.5.4.9. Deploying a policy to change the managed cluster parameter You can push the ClusterVersion policy to the managed cluster to change the default location where it retrieves its upgrades. Complete the following steps: From the managed cluster, confirm that the ClusterVersion upstream parameter is currently the default public OpenShift Update Service operand by running the following command: oc get clusterversion -o yaml From the hub cluster, identify the route URL to the OpenShift Update Service operand by running the following command: oc get routes Remember the result for later. In the hub cluster console menu, select Governance > Create a policy . Set the YAML switch to On to view the YAML version of the policy. Delete all of the content in the YAML code. 
Paste the following YAML content into the window to create a custom policy: apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-cluster-version namespace: default annotations: policy.open-cluster-management.io/standards: null policy.open-cluster-management.io/categories: null policy.open-cluster-management.io/controls: null spec: disabled: false remediationAction: enforce policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-cluster-version spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: config.openshift.io/v1 kind: ClusterVersion metadata: name: version spec: channel: stable-4.4 upstream: >- https://example-cincinnati-policy-engine-uri/api/upgrades_info/v1/graph 1 --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-cluster-version namespace: default placementRef: name: placement-policy-cluster-version kind: PlacementRule apiGroup: apps.open-cluster-management.io subjects: - name: policy-cluster-version kind: Policy apiGroup: policy.open-cluster-management.io --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-cluster-version namespace: default spec: clusterSelector: matchExpressions: [] 2 1 Replace with the path to your hub cluster OpenShift Update Service operand. Selects all clusters if not specified. You can complete the following steps to determine the path to the operand: Run the oc get get routes -A command on the hub cluster. Find the route to update-service . The path to the operand is the value in the HOST/PORT field. Select Enforce if supported . Select Create to create the policy. In the managed cluster CLI, confirm that the upstream parameter in the ClusterVersion is updated with the local hub cluster OpenShift Update Service URL by running the following command: oc get clusterversion -o yaml Verify that the results resemble the following content: apiVersion: v1 items: - apiVersion: config.openshift.io/v1 kind: ClusterVersion [..] spec: channel: stable-4.4 upstream: https://<hub-cincinnati-uri>/api/upgrades_info/v1/graph 1.5.4.10. Viewing available upgrades You can view a list of available upgrades for your managed cluster by completing the following steps: From the console, select Infrastructure > Clusters . Select a cluster that is in the Ready state. From the Actions menu, select Upgrade cluster . Verify that the optional upgrade paths are available. Note: No available upgrade versions are shown if the current version is not mirrored into the local image repository. 1.5.4.11. Selecting a channel You can use the Red Hat Advanced Cluster Management console to select a channel for your cluster upgrades on OpenShift Container Platform. Those versions must be available on the mirror registry. Complete the steps in Selecting a channel to specify a channel for your upgrades. 1.5.4.12. Upgrading the cluster After configuring the disconnected registry, Red Hat Advanced Cluster Management and OpenShift Update Service use the disconnected registry to find if upgrades are available. If no available upgrades are displayed, make sure that you have the release image of the current level of the cluster and at least one later level mirrored in the local repository. If the release image for the current version of the cluster is not available, no upgrades are available. 
Complete the following steps to upgrade: In the console, select Infrastructure > Clusters . Find the cluster that you want to choose if there is an available upgrade. If there is an upgrade available, the Distribution version column for the cluster shows an upgrade available. Select the Options menu for the cluster, and select Upgrade cluster . Select the target version for the upgrade, and select Upgrade . If your cluster upgrade fails, the Operator generally retries the upgrade a few times, stops, and reports the status of the failing component. In some cases, the upgrade process continues to cycle through attempts to complete the process. Rolling your cluster back to a version after a failed upgrade is not supported. Contact Red Hat support for assistance if your cluster upgrade fails. 1.5.4.12.1. Additional resources See Configuring additional trust stores for image registry access in the OpenShift Container Platform documentation to learn more about the external registry CA certificate. 1.5.5. Advanced configuration The multicluster engine operator is installed using an operator that deploys all of the required components. The multicluster engine operator can be further configured during or after installation. Learn more about the advanced configuration options. 1.5.5.1. Deployed components Add one or more of the following attributes to the MultiClusterEngine custom resource: Table 1.3. Table list of the deployed components Name Description Enabled assisted-service Installs OpenShift Container Platform with minimal infrastructure prerequisites and comprehensive pre-flight validations True cluster-lifecycle Provides cluster management capabilities for OpenShift Container Platform and Kubernetes hub clusters True cluster-manager Manages various cluster-related operations within the cluster environment True cluster-proxy-addon Automates the installation of apiserver-network-proxy on both hub and managed clusters using a reverse proxy server True console-mce Enables the multicluster engine operator console plug-in True discovery Discovers and identifies new clusters within the OpenShift Cluster Manager True hive Provisions and performs initial configuration of OpenShift Container Platform clusters True hypershift Hosts OpenShift Container Platform control planes at scale with cost and time efficiency, and cross-cloud portability True hypershift-local-hosting Enables local hosting capabilities for within the local cluster environment True image-based-install-operator Provides site configuration to single-node OpenShift clusters to complete installation False local-cluster Enables the import and self-management of the local hub cluster where the multicluster engine operator is deployed True managedserviceacccount Synchronizes service accounts to managed clusters, and collects tokens as secret resources to give back to the hub cluster True server-foundation Provides foundational services for server-side operations within the multicluster environment True When you install multicluster engine operator on to the cluster, not all of the listed components are enabled by default. You can further configure multicluster engine operator during or after installation by adding one or more attributes to the MultiClusterEngine custom resource. Continue reading for information about the attributes that you can add. 1.5.5.2. 
Console and component configuration The following example displays the spec.overrides default template that you can use to enable or disable the component: apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: overrides: components: - name: <name> 1 enabled: true Replace name with the name of the component. Alternatively, you can run the following command. Replace namespace with the name of your project and name with the name of the component: 1.5.5.3. Local-cluster enablement By default, the cluster that is running multicluster engine operator manages itself. To install multicluster engine operator without the cluster managing itself, specify the following values in the spec.overrides.components settings in the MultiClusterEngine section: apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: overrides: components: - name: local-cluster enabled: false The name value identifies the hub cluster as a local-cluster . The enabled setting specifies whether the feature is enabled or disabled. When the value is true , the hub cluster manages itself. When the value is false , the hub cluster does not manage itself. A hub cluster that is managed by itself is designated as the local-cluster in the list of clusters. 1.5.5.4. Custom image pull secret If you plan to import Kubernetes clusters that were not created by OpenShift Container Platform or the multicluster engine operator, generate a secret that contains your OpenShift Container Platform pull secret information to access the entitled content from the distribution registry. The secret requirements for OpenShift Container Platform clusters are automatically resolved by OpenShift Container Platform and multicluster engine for Kubernetes operator, so you do not have to create the secret if you are not importing other types of Kubernetes clusters to be managed. Important: These secrets are namespace-specific, so make sure that you are in the namespace that you use for your engine. Download your OpenShift Container Platform pull secret file from cloud.redhat.com/openshift/install/pull-secret by selecting Download pull secret . Your OpenShift Container Platform pull secret is associated with your Red Hat Customer Portal ID, and is the same across all Kubernetes providers. Run the following command to create your secret: Replace secret with the name of the secret that you want to create. Replace namespace with your project namespace, as the secrets are namespace-specific. Replace path-to-pull-secret with the path to your OpenShift Container Platform pull secret that you downloaded. The following example displays the spec.imagePullSecret template to use if you want to use a custom pull secret. Replace secret with the name of your pull secret: apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: imagePullSecret: <secret> 1.5.5.5. Target namespace The operands can be installed in a designated namespace by specifying a location in the MultiClusterEngine custom resource. This namespace is created upon application of the MultiClusterEngine custom resource. Important: If no target namespace is specified, the operator will install to the multicluster-engine namespace and will set it in the MultiClusterEngine custom resource specification. The following example displays the spec.targetNamespace template that you can use to specify a target namespace. 
Replace target with the name of your destination namespace. Note: The target namespace cannot be the default namespace: apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: targetNamespace: <target> 1.5.5.6. availabilityConfig The hub cluster has two availabilities: High and Basic . By default, the hub cluster has an availability of High , which gives hub cluster components a replicaCount of 2 . This provides better support in cases of failover but consumes more resources than the Basic availability, which gives components a replicaCount of 1 . Important: Set spec.availabilityConfig to Basic if you are using multicluster engine operator on a single-node OpenShift cluster. The following examples shows the spec.availabilityConfig template with Basic availability: apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: availabilityConfig: "Basic" 1.5.5.7. nodeSelector You can define a set of node selectors in the MultiClusterEngine to install to specific nodes on your cluster. The following example shows spec.nodeSelector to assign pods to nodes with the label node-role.kubernetes.io/infra : spec: nodeSelector: node-role.kubernetes.io/infra: "" To define a set of node selectors for the Red Hat Advanced Cluster Management for Kubernetes hub cluster, see nodeSelector in the product documentation. 1.5.5.8. tolerations You can define a list of tolerations to allow the MultiClusterEngine to tolerate specific taints defined on the cluster. The following example shows a spec.tolerations that matches a node-role.kubernetes.io/infra taint: spec: tolerations: - key: node-role.kubernetes.io/infra effect: NoSchedule operator: Exists The infra-node toleration is set on pods by default without specifying any tolerations in the configuration. Customizing tolerations in the configuration will replace this default behavior. To define a list of tolerations for the Red Hat Advanced Cluster Management for Kubernetes hub cluster, see tolerations in the product documentation. 1.5.5.9. ManagedServiceAccount add-on The ManagedServiceAccount add-on allows you to create or delete a service account on a managed cluster. To install with this add-on enabled, include the following in the MultiClusterEngine specification in spec.overrides : apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: overrides: components: - name: managedserviceaccount enabled: true The ManagedServiceAccount add-on can be enabled after creating MultiClusterEngine by editing the resource on the command line and setting the managedserviceaccount component to enabled: true . Alternatively, you can run the following command and replace <multiclusterengine-name> with the name of your MultiClusterEngine resource. 1.5.6. Uninstalling When you uninstall multicluster engine for Kubernetes operator, you see two different levels of the process: A custom resource removal and a complete operator uninstall . It might take up to five minutes to complete the uninstall process. The custom resource removal is the most basic type of uninstall that removes the custom resource of the MultiClusterEngine instance but leaves other required operator resources. This level of uninstall is helpful if you plan to reinstall using the same settings and components. The second level is a more complete uninstall that removes most operator components, excluding components such as custom resource definitions. 
When you continue with this step, it removes all of the components and subscriptions that were not removed with the custom resource removal. After this uninstall, you must reinstall the operator before reinstalling the custom resource. 1.5.6.1. Prerequisite: Detach enabled services Before you uninstall the multicluster engine for Kubernetes operator, you must detach all of the clusters that are managed by that engine. To avoid errors, detach all clusters that are still managed by the engine, then try to uninstall again. If you have managed clusters attached, you might see the following message. For more information about detaching clusters, see the Removing a cluster from management section by selecting the information for your provider in Creating clusters . 1.5.6.2. Removing resources by using commands If you have not already, ensure that your OpenShift Container Platform CLI is configured to run oc commands. See Getting started with the OpenShift CLI in the OpenShift Container Platform documentation for more information about how to configure the oc commands. Change to your project namespace by entering the following command. Replace namespace with the name of your project namespace: Enter the following command to remove the MultiClusterEngine custom resource: You can view the progress by entering the following command: Enter the following commands to delete the multicluster-engine ClusterServiceVersion in the namespace it is installed in: The CSV version shown here may be different. 1.5.6.3. Deleting the components by using the console When you use the Red Hat OpenShift Container Platform console to uninstall, you remove the operator. Complete the following steps to uninstall by using the console: In the OpenShift Container Platform console navigation, select Operators > Installed Operators > multicluster engine for Kubernetes . Remove the MultiClusterEngine custom resource. Select the tab for Multiclusterengine . Select the Options menu for the MultiClusterEngine custom resource. Select Delete MultiClusterEngine . Run the clean-up script according to the procedure in the following section. Tip: If you plan to reinstall the same multicluster engine for Kubernetes operator version, you can skip the rest of the steps in this procedure and reinstall the custom resource. Navigate to Installed Operators . Remove the multicluster engine for Kubernetes operator by selecting the Options menu and selecting Uninstall operator . 1.5.6.4. Troubleshooting Uninstall If the multicluster engine custom resource is not being removed, remove any potential remaining artifacts by running the clean-up script. Copy the following script into a file: See Disconnected installation mirroring for more information. 1.6. Managing credentials A credential is required to create and manage a Red Hat OpenShift Container Platform cluster on a cloud service provider with multicluster engine operator. The credential stores the access information for a cloud provider. Each provider account requires its own credential, as does each domain on a single provider. You can create and manage your cluster credentials. Credentials are stored as Kubernetes secrets. Secrets are copied to the namespace of a managed cluster so that the controllers for the managed cluster can access the secrets. When a credential is updated, the copies of the secret are automatically updated in the managed cluster namespaces.
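Because credentials are stored as ordinary Kubernetes secrets, you can also inspect them from the CLI. This is a minimal sketch; the namespace names are placeholders:

# List the credential secrets in the namespace that hosts your credentials
oc get secrets -n <credential-namespace>

# The copy that is made for a managed cluster is stored in that cluster's namespace
oc get secrets -n <managed-cluster-namespace>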
Note: Changes to the pull secret, SSH keys, or base domain of the cloud provider credentials are not reflected for existing managed clusters, as they have already been provisioned using the original credentials. Required access: Edit Creating a credential for Amazon Web Services Creating a credential for Microsoft Azure Creating a credential for Google Cloud Platform Creating a credential for VMware vSphere Creating a credential for Red Hat OpenStack Platform Creating a credential for Red Hat OpenShift Cluster Manager Creating a credential for Ansible Automation Platform Creating a credential for an on-premises environment 1.6.1. Creating a credential for Amazon Web Services You need a credential to use multicluster engine operator console to deploy and manage an Red Hat OpenShift Container Platform cluster on Amazon Web Services (AWS). Required access: Edit Note: This procedure must be done before you can create a cluster with multicluster engine operator. 1.6.1.1. Prerequisites You must have the following prerequisites before creating a credential: A deployed multicluster engine operator hub cluster Internet access for your multicluster engine operator hub cluster so it can create the Kubernetes cluster on Amazon Web Services (AWS) AWS login credentials, which include access key ID and secret access key. See Understanding and getting your security credentials . Account permissions that allow installing clusters on AWS. See Configuring an AWS account for instructions on how to configure an AWS account. 1.6.1.2. Managing a credential by using the console To create a credential from the multicluster engine operator console, complete the steps in the console. Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, both for convenience and added security. You can optionally add a Base DNS domain for your credential. If you add the base DNS domain to the credential, it is automatically populated in the correct field when you create a cluster with this credential. See the following steps: Add your AWS access key ID for your AWS account. See Log in to AWS to find your ID. Provide the contents for your new AWS Secret Access Key . If you want to enable a proxy, enter the proxy information: HTTP proxy URL: The URL that should be used as a proxy for HTTP traffic. HTTPS proxy URL: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS . No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations. Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections. Enter your Red Hat OpenShift pull secret. See Download your Red Hat OpenShift pull secret to download your pull secret. Add your SSH private key and SSH public key , which allows you to connect to the cluster. You can use an existing key pair, or create a new one with key generation program. You can create a cluster that uses this credential by completing the steps in Creating a cluster on Amazon Web Services or Creating a cluster on Amazon Web Services GovCloud . You can edit your credential in the console. 
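If you do not have an existing key pair for the SSH private key and SSH public key fields, you can generate one with a key generation program such as ssh-keygen. This is a minimal sketch; the key type and file path are arbitrary choices:

# Generate a key pair without a passphrase; adjust the path and type as needed
ssh-keygen -t ed25519 -N '' -f ./cluster-ssh-key

# Paste the contents of cluster-ssh-key into the SSH private key field and
# the contents of cluster-ssh-key.pub into the SSH public key field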
If the cluster was created by using this provider connection, then the <cluster-name>-aws-creds secret from <cluster-namespace> will get updated with the new credentials. Note: Updating credentials does not work for cluster pool claimed clusters. When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete. 1.6.1.2.1. Creating an S3 secret To create an Amazon Simple Storage Service (S3) secret, complete the following task from the console: Click Add credential > AWS > S3 Bucket . If you click For Hosted Control Plane , the name and namespace are provided. Enter information for the following fields that are provided: bucket name : Add the name of the S3 bucket. aws_access_key_id : Add your AWS access key ID for your AWS account. Log in to AWS to find your ID. aws_secret_access_key : Provide the contents for your new AWS Secret Access Key. Region : Enter your AWS region. 1.6.1.3. Creating an opaque secret by using the API To create an opaque secret for Amazon Web Services by using the API, apply YAML content in the YAML preview window that is similar to the following example: kind: Secret metadata: name: <managed-cluster-name>-aws-creds namespace: <managed-cluster-namespace> type: Opaque data: aws_access_key_id: $(echo -n "${AWS_KEY}" | base64 -w0) aws_secret_access_key: $(echo -n "${AWS_SECRET}" | base64 -w0) Notes: Opaque secrets are not visible in the console. Opaque secrets are created in the managed cluster namespace you chose. Hive uses the opaque secret to provision the cluster. When provisioning the cluster by using the Red Hat Advanced Cluster Management console, the credentials you previously created are copied to the managed cluster namespace as the opaque secret. Add labels to your credentials to view your secret in the console. For example, the following AWS S3 Bucket oc label secret is appended with type=awss3 and credentials --from-file=... : 1.6.1.4. Additional resources See Understanding and getting your security credentials . See Configuring an AWS account . Log in to AWS . Download your Red Hat OpenShift pull secret . See Generating a key pair for cluster node SSH access for more information about how to generate a key. See Creating a cluster on Amazon Web Services . See Creating a cluster on Amazon Web Services GovCloud . Return to Creating a credential for Amazon Web Services . 1.6.2. Creating a credential for Microsoft Azure You need a credential to use multicluster engine operator console to create and manage a Red Hat OpenShift Container Platform cluster on Microsoft Azure or on Microsoft Azure Government. Required access: Edit Note: This procedure is a prerequisite for creating a cluster with multicluster engine operator. 1.6.2.1. Prerequisites You must have the following prerequisites before creating a credential: A deployed multicluster engine operator hub cluster. Internet access for your multicluster engine operator hub cluster so that it can create the Kubernetes cluster on Azure. Azure login credentials, which include your Base Domain Resource Group and Azure Service Principal JSON. See Microsoft Azure portal to get your login credentials. Account permissions that allow installing clusters on Azure. See How to configure Cloud Services and Configuring an Azure account for more information. 1.6.2.2.
Managing a credential by using the console To create a credential from the multicluster engine operator console, complete the steps in the console. Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, both for convenience and added security. Optional: Add a Base DNS domain for your credential. If you add the base DNS domain to the credential, it is automatically populated in the correct field when you create a cluster with this credential. Select whether the environment for your cluster is AzurePublicCloud or AzureUSGovernmentCloud . The settings are different for the Azure Government environment, so ensure that this is set correctly. Add your Base domain resource group name for your Azure account. This entry is the resource name that you created with your Azure account. You can find your Base Domain Resource Group Name by selecting Home > DNS Zones in the Azure interface. See Create an Azure service principal with the Azure CLI to find your base domain resource group name. Provide the contents for your Client ID . This value is generated as the appId property when you create a service principal with the following command: Replace service_principal with the name of your service principal. Add your Client Secret . This value is generated as the password property when you create a service principal with the following command: Replace service_principal with the name of your service principal. Add your Subscription ID . This value is the id property in the output of the following command: Add your Tenant ID . This value is the tenantId property in the output of the following command: If you want to enable a proxy, enter the proxy information: HTTP proxy URL: The URL that should be used as a proxy for HTTP traffic. HTTPS proxy URL: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS . No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations. Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections. Enter your Red Hat OpenShift pull secret . See Download your Red Hat OpenShift pull secret to download your pull secret. Add your SSH private key and SSH public key to use to connect to the cluster. You can use an existing key pair, or create a new pair using a key generation program. You can create a cluster that uses this credential by completing the steps in Creating a cluster on Microsoft Azure . You can edit your credential in the console. When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete. 1.6.2.3. 
Creating an opaque secret by using the API To create an opaque secret for Microsoft Azure by using the API instead of the console, apply YAML content in the YAML preview window that is similar to the following example: kind: Secret metadata: name: <managed-cluster-name>-azure-creds namespace: <managed-cluster-namespace> type: Opaque data: baseDomainResourceGroupName: USD(echo -n "USD{azure_resource_group_name}" | base64 -w0) osServicePrincipal.json: USD(base64 -w0 "USD{AZURE_CRED_JSON}") Notes: Opaque secrets are not visible in the console. Opaque secrets are created in the managed cluster namespace you chose. Hive uses the opaque secret to provision the cluster. When provisioning the cluster by using the Red Hat Advanced Cluster Management console, the credentials you previoulsy created are copied to the managed cluster namespace as the opaque secret. 1.6.2.4. Additional resources See Microsoft Azure portal . See How to configure Cloud Services . See Configuring an Azure account . See Create an Azure service principal with the Azure CLI to find your base domain resource group name. Download your Red Hat OpenShift pull secret . See Generating a key pair for cluster node SSH access for more information about how to generate a key. See Creating a cluster on Microsoft Azure . Return to Creating a credential for Microsoft Azure . 1.6.3. Creating a credential for Google Cloud Platform You need a credential to use multicluster engine operator console to create and manage a Red Hat OpenShift Container Platform cluster on Google Cloud Platform (GCP). Required access: Edit Note: This procedure is a prerequisite for creating a cluster with multicluster engine operator. 1.6.3.1. Prerequisites You must have the following prerequisites before creating a credential: A deployed multicluster engine operator hub cluster Internet access for your multicluster engine operator hub cluster so it can create the Kubernetes cluster on GCP GCP login credentials, which include user Google Cloud Platform Project ID and Google Cloud Platform service account JSON key. See Creating and managing projects . Account permissions that allow installing clusters on GCP. See Configuring a GCP project for instructions on how to configure an account. 1.6.3.2. Managing a credential by using the console To create a credential from the multicluster engine operator console, complete the steps in the console. Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, for both convenience and security. You can optionally add a Base DNS domain for your credential. If you add the base DNS domain to the credential, it is automatically populated in the correct field when you create a cluster with this credential. See the following steps: Add your Google Cloud Platform project ID for your GCP account. See Log in to GCP to retrieve your settings. Add your Google Cloud Platform service account JSON key . See the Create service accounts documentation to create your service account JSON key. Follow the steps for the GCP console. Provide the contents for your new Google Cloud Platform service account JSON key . If you want to enable a proxy, enter the proxy information: HTTP proxy URL: The URL that should be used as a proxy for HTTP traffic. HTTPS proxy URL: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS . 
No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add and asterisk * to bypass the proxy for all destinations. Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections. Enter your Red Hat OpenShift pull secret. See Download your Red Hat OpenShift pull secret to download your pull secret. Add your SSH private key and SSH public key so you can access the cluster. You can use an existing key pair, or create a new pair using a key generation program. You can use this connection when you create a cluster by completing the steps in Creating a cluster on Google Cloud Platform . You can edit your credential in the console. When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete. 1.6.3.3. Creating an opaque secret by using the API To create an opaque secret for Google Cloud Platform by using the API instead of the console, apply YAML content in the YAML preview window that is similar to the following example: kind: Secret metadata: name: <managed-cluster-name>-gcp-creds namespace: <managed-cluster-namespace> type: Opaque data: osServiceAccount.json: USD(base64 -w0 "USD{GCP_CRED_JSON}") Notes: Opaque secrets are not visible in the console. Opaque secrets are created in the managed cluster namespace you chose. Hive uses the opaque secret to provision the cluster. When provisioning the cluster by using the Red Hat Advanced Cluster Management console, the credentials you previoulsy created are copied to the managed cluster namespace as the opaque secret. 1.6.3.4. Additional resources See Creating and managing projects . See Configuring a GCP project . Log in to GCP . See the Create service accounts to create your service account JSON key. Download your Red Hat OpenShift pull secret . See Generating a key pair for cluster node SSH access for more information about how to generate a key. See Creating a cluster on Google Cloud Platform . Return to Creating a credential for Google Cloud Platform . 1.6.4. Creating a credential for VMware vSphere You need a credential to use multicluster engine operator console to deploy and manage a Red Hat OpenShift Container Platform cluster on VMware vSphere. Required access: Edit 1.6.4.1. Prerequisites You must have the following prerequisites before you create a credential: You must create a credential for VMware vSphere before you can create a cluster with multicluster engine operator. A deployed hub cluster on a supported OpenShift Container Platform version. Internet access for your hub cluster so it can create the Kubernetes cluster on VMware vSphere. VMware vSphere login credentials and vCenter requirements configured for OpenShift Container Platform when using installer-provisioned infrastructure. See Installing a cluster on vSphere with customizations . These credentials include the following information: vCenter account privileges. Cluster resources. DHCP available. ESXi hosts have time synchronized (for example, NTP). 1.6.4.2. Managing a credential by using the console To create a credential from the multicluster engine operator console, complete the steps in the console. Start at the navigation menu. Click Credentials to choose from existing credential options. 
Tip: Create a namespace specifically to host your credentials, both for convenience and added security. You can optionally add a Base DNS domain for your credential. If you add the base DNS domain to the credential, it is automatically populated in the correct field when you create a cluster with this credential. See the following steps: Add your VMware vCenter server fully-qualified host name or IP address . The value must be defined in the vCenter server root CA certificate. If possible, use the fully-qualified host name. Add your VMware vCenter username . Add your VMware vCenter password . Add your VMware vCenter root CA certificate . You can download your certificate in the download.zip package with the certificate from your VMware vCenter server at: https://<vCenter_address>/certs/download.zip . Replace vCenter_address with the address to your vCenter server. Unpackage the download.zip . Use the certificates from the certs/<platform> directory that have a .0 extension. Tip: You can use the ls certs/<platform> command to list all of the available certificates for your platform. Replace <platform> with the abbreviation for your platform: lin , mac , or win . For example: certs/lin/3a343545.0 Best practice: Link together multiple certificates with a .0 extension by running the cat certs/lin/*.0 > ca.crt command. Add your VMware vSphere cluster name . Add your VMware vSphere datacenter . Add your VMware vSphere default datastore . Add your VMware vSphere disk type . Add your VMware vSphere folder . Add your VMware vSphere resource pool . For disconnected installations only: Complete the fields in the Configuration for disconnected installation subsection with the required information: Cluster OS image : This value contains the URL to the image to use for Red Hat OpenShift Container Platform cluster machines. Image content source : This value contains the disconnected registry path. The path contains the hostname, port, and repository path to all of the installation images for disconnected installations. Example: repository.com:5000/openshift/ocp-release . The path creates an image content source policy mapping in the install-config.yaml to the Red Hat OpenShift Container Platform release images. As an example, repository.com:5000 produces this imageContentSource content: - mirrors: - registry.example.com:5000/ocp4 source: quay.io/openshift-release-dev/ocp-release-nightly - mirrors: - registry.example.com:5000/ocp4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - registry.example.com:5000/ocp4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev Additional trust bundle : This value provides the contents of the certificate file that is required to access the mirror registry. Note: If you are deploying managed clusters from a hub that is in a disconnected environment, and want them to be automatically imported post install, add an Image Content Source Policy to the install-config.yaml file by using the YAML editor. A sample entry is shown in the following example: - mirrors: - registry.example.com:5000/rhacm2 source: registry.redhat.io/rhacm2 If you want to enable a proxy, enter the proxy information: HTTP proxy URL: The URL that should be used as a proxy for HTTP traffic. HTTPS proxy URL: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS . No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . 
to include all of the subdomains that are in that domain. Add and asterisk * to bypass the proxy for all destinations. Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections. Enter your Red Hat OpenShift pull secret. See Download your Red Hat OpenShift pull secret to download your pull secret. Add your SSH private key and SSH public key , which allows you to connect to the cluster. You can use an existing key pair, or create a new one with key generation program. You can create a cluster that uses this credential by completing the steps in Creating a cluster on VMware vSphere . You can edit your credential in the console. When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete. 1.6.4.3. Creating an opaque secret by using the API To create an opaque secret for VMware vSphere by using the API instead of the console, apply YAML content in the YAML preview window that is similar to the following example: kind: Secret metadata: name: <managed-cluster-name>-vsphere-creds namespace: <managed-cluster-namespace> type: Opaque data: username: USD(echo -n "USD{VMW_USERNAME}" | base64 -w0) password.json: USD(base64 -w0 "USD{VMW_PASSWORD}") Notes: Opaque secrets are not visible in the console. Opaque secrets are created in the managed cluster namespace you chose. Hive uses the opaque secret to provision the cluster. When provisioning the cluster by using the Red Hat Advanced Cluster Management console, the credentials you previoulsy created are copied to the managed cluster namespace as the opaque secret. 1.6.4.4. Additional resources See Installing a cluster on vSphere with customizations . Download your Red Hat OpenShift pull secret . See Generating a key pair for cluster node SSH access for more information. See Creating a cluster on VMware vSphere . Return to Creating a credential for VMware vSphere . 1.6.5. Creating a credential for Red Hat OpenStack You need a credential to use multicluster engine operator console to deploy and manage a supported Red Hat OpenShift Container Platform cluster on Red Hat OpenStack Platform. Notes: You must create a credential for Red Hat OpenStack Platform before you can create a cluster with multicluster engine operator. 1.6.5.1. Prerequisites You must have the following prerequisites before you create a credential: A deployed hub cluster on a supported OpenShift Container Platform version. Internet access for your hub cluster so it can create the Kubernetes cluster on Red Hat OpenStack Platform. Red Hat OpenStack Platform login credentials and Red Hat OpenStack Platform requirements configured for OpenShift Container Platform when using installer-provisioned infrastructure. See Installing a cluster on OpenStack with customizations . Download or create a clouds.yaml file for accessing the CloudStack API. Within the clouds.yaml file: Determine the cloud auth section name to use. Add a line for the password , immediately following the username line. 1.6.5.2. Managing a credential by using the console To create a credential from the multicluster engine operator console, complete the steps in the console. Start at the navigation menu. Click Credentials to choose from existing credential options. To enhance security and convenience, you can create a namespace specifically to host your credentials. 
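For example, you can create that dedicated namespace from the CLI before you add the credential; the namespace name is a placeholder:

# Create a namespace that only holds credential secrets
oc create namespace <credential-namespace>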
Optional: You can add a Base DNS domain for your credential. If you add the base DNS domain, it is automatically populated in the correct field when you create a cluster with this credential. Add your Red Hat OpenStack Platform clouds.yaml file contents. The contents of the clouds.yaml file, including the password, provide the required information for connecting to the Red Hat OpenStack Platform server. The file contents must include the password, which you add to a new line immediately after the username . Add your Red Hat OpenStack Platform cloud name. This entry is the name specified in the cloud section of the clouds.yaml to use for establishing communication to the Red Hat OpenStack Platform server. Optional : For configurations that use an internal certificate authority, enter your certificate in the Internal CA certificate field to automatically update your clouds.yaml with the certificate information. For disconnected installations only: Complete the fields in the Configuration for disconnected installation subsection with the required information: Cluster OS image : This value contains the URL to the image to use for Red Hat OpenShift Container Platform cluster machines. Image content sources : This value contains the disconnected registry path. The path contains the hostname, port, and repository path to all of the installation images for disconnected installations. Example: repository.com:5000/openshift/ocp-release . The path creates an image content source policy mapping in the install-config.yaml to the Red Hat OpenShift Container Platform release images. As an example, repository.com:5000 produces this imageContentSource content: - mirrors: - registry.example.com:5000/ocp4 source: quay.io/openshift-release-dev/ocp-release-nightly - mirrors: - registry.example.com:5000/ocp4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - registry.example.com:5000/ocp4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev Additional trust bundle : This value provides the contents of the certificate file that is required to access the mirror registry. Note: If you are deploying managed clusters from a hub that is in a disconnected environment, and want them to be automatically imported post install, add an Image Content Source Policy to the install-config.yaml file by using the YAML editor. A sample entry is shown in the following example: - mirrors: - registry.example.com:5000/rhacm2 source: registry.redhat.io/rhacm2 If you want to enable a proxy, enter the proxy information: HTTP proxy URL: The URL that should be used as a proxy for HTTP traffic. HTTPS proxy URL: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS . No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations. Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections. Enter your Red Hat OpenShift pull secret. See Download your Red Hat OpenShift pull secret to download your pull secret. Add your SSH Private Key and SSH Public Key, which allows you to connect to the cluster. You can use an existing key pair, or create a new one with key generation program. Click Create . Review the new credential information, then click Add . 
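The clouds.yaml contents that the procedure asks for typically resemble the following example; the cloud name, URL, and credential values are placeholders, and the password line is added immediately after the username line as described earlier:

clouds:
  openstack:
    auth:
      auth_url: https://<openstack-api-endpoint>:13000
      username: "<username>"
      password: "<password>"
      project_id: <project-id>
      project_name: "<project-name>"
      user_domain_name: "Default"
    region_name: "regionOne"
    interface: "public"
    identity_api_version: 3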
When you add the credential, it is added to the list of credentials. You can create a cluster that uses this credential by completing the steps in Creating a cluster on Red Hat OpenStack Platform . You can edit your credential in the console. When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete. 1.6.5.3. Creating an opaque secret by using the API To create an opaque secret for Red Hat OpenStack Platform by using the API instead of the console, apply YAML content in the YAML preview window that is similar to the following example: kind: Secret metadata: name: <managed-cluster-name>-osp-creds namespace: <managed-cluster-namespace> type: Opaque data: clouds.yaml: USD(base64 -w0 "USD{OSP_CRED_YAML}") cloud: USD(echo -n "openstack" | base64 -w0) Notes: Opaque secrets are not visible in the console. Opaque secrets are created in the managed cluster namespace you chose. Hive uses the opaque secret to provision the cluster. When provisioning the cluster by using the Red Hat Advanced Cluster Management console, the credentials you previoulsy created are copied to the managed cluster namespace as the opaque secret. 1.6.5.4. Additional resources See Installing a cluster on OpenStack with customizations . Download your Red Hat OpenShift pull secret . See Generating a key pair for cluster node SSH access for more information. See Creating a cluster on Red Hat OpenStack Platform . Return to Creating a credential for Red Hat OpenStack . 1.6.6. Creating a credential for Red Hat OpenShift Cluster Manager Add an OpenShift Cluster Manager credential so that you can discover clusters. Required access: Administrator 1.6.6.1. Prerequisites You need an API token for the OpenShift Cluster Manager account, or you can use a separate Service Account. To obtain an API token, see Downloading the OpenShift Cluster Manager API token . To use a Service Account, you must obtain the client ID and client secret when you are creating the Service Account. Enter the credentials to create the OpenShift Cluster Manager credential on your multicluster engine for Kubernetes operator. See Creating and managing a service account . 1.6.6.2. Adding a credential by using the console You need to add your credential to discover clusters. To create a credential from the multicluster engine operator console, complete the steps in the console: Log in to your cluster. Click Credentials > Credential type to choose from existing credential options. Create a namespace specifically to host your credentials, both for convenience and added security. Click Add credential . Select the Red Hat OpenShift Cluster Manager option. Select one of the authentication methods. Notes: When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. If your credential is removed, or your OpenShift Cluster Manager API token expires or is revoked, then the associated discovered clusters are removed. 1.6.7. Creating a credential for Ansible Automation Platform You need a credential to use multicluster engine operator console to deploy and manage an Red Hat OpenShift Container Platform cluster that is using Red Hat Ansible Automation Platform. Required access: Edit Note: This procedure must be done before you can create an Automation template to enable automation on a cluster. 1.6.7.1. 
Prerequisites You must have the following prerequisites before creating a credential: A deployed multicluster engine operator hub cluster Internet access for your multicluster engine operator hub cluster Ansible login credentials, which includes Ansible Automation Platform hostname and OAuth token; see Credentials for Ansible Automation Platform . Account permissions that allow you to install hub clusters and work with Ansible. Learn more about Ansible users . 1.6.7.2. Managing a credential by using the console To create a credential from the multicluster engine operator console, complete the steps in the console. Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, both for convenience and added security. The Ansible Token and host URL that you provide when you create your Ansible credential are automatically updated for the automations that use that credential when you edit the credential. The updates are copied to any automations that use that Ansible credential, including those related to cluster lifecycle, governance, and application management automations. This ensures that the automations continue to run after the credential is updated. You can edit your credential in the console. Ansible credentials are automatically updated in your automation that use that credential when you update them in the credential. You can create an Ansible Job that uses this credential by completing the steps in Configuring Ansible Automation Platform tasks to run on managed clusters . When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete. 1.6.8. Creating a credential for an on-premises environment You need a credential to use the console to deploy and manage a Red Hat OpenShift Container Platform cluster in an on-premises environment. The credential specifies the connections that are used for the cluster. Required access: Edit Prerequisites Managing a credential by using the console 1.6.8.1. Prerequisites You need the following prerequisites before creating a credential: A hub cluster that is deployed. Internet access for your hub cluster so it can create the Kubernetes cluster on your infrastructure environment. For a disconnected environment, you must have a configured mirror registry where you can copy the release images for your cluster creation. See Disconnected installation mirroring in the OpenShift Container Platform documentation for more information. Account permissions that support installing clusters on the on-premises environment. 1.6.8.2. Managing a credential by using the console To create a credential from the console, complete the steps in the console. Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, both for convenience and added security. Select Host inventory for your credential type. You can optionally add a Base DNS domain for your credential. If you add the base DNS domain to the credential, it is automatically populated in the correct field when you create a cluster with this credential. If you do not add the DNS domain, you can add it when you create your cluster. Enter your Red Hat OpenShift pull secret . 
This pull secret is automatically entered when you create a cluster and specify this credential. You can download your pull secret from Pull secret . See Using image pull secrets for more information about pull secrets. Enter your SSH public key . This SSH public key is also automatically entered when you create a cluster and specify this credential. Select Add to create your credential. You can create a cluster that uses this credential by completing the steps in Creating a cluster in an on-premises environment . When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete. 1.7. Cluster lifecycle introduction The multicluster engine operator is the cluster lifecycle operator that provides cluster management capabilities for OpenShift Container Platform and Red Hat Advanced Cluster Management hub clusters. The multicluster engine operator is a software operator that enhances cluster fleet management and supports OpenShift Container Platform cluster lifecycle management across clouds and data centers. You can use multicluster engine operator with or without Red Hat Advanced Cluster Management. Red Hat Advanced Cluster Management also installs multicluster engine operator automatically and offers further multicluster capabilities. See the following documentation: Cluster lifecycle architecture Managing credentials overview Release images Creating clusters Cluster import Accessing your cluster Scaling managed clusters Hibernating a created cluster Upgrading your cluster Enabling cluster proxy add-ons Configuring Ansible Automation Platform tasks to run on managed clusters ClusterClaims ManagedClusterSets Placement Managing cluster pools (Technology Preview) Enabling ManagedServiceAccount Cluster lifecycle advanced configuration Removing a cluster from management 1.7.1. Cluster lifecycle architecture Cluster lifecycle requires two types of clusters: hub clusters and managed clusters . The hub cluster is the OpenShift Container Platform (or Red Hat Advanced Cluster Management) main cluster with the multicluster engine operator automatically installed. You can create, manage, and monitor other Kubernetes clusters with the hub cluster. You can create clusters by using the hub cluster, while you can also import existing clusters to be managed by the hub cluster. When you create a managed cluster, the cluster is created using the Red Hat OpenShift Container Platform cluster installer with the Hive resource. You can find more information about the process of installing clusters with the OpenShift Container Platform installer by reading Installing and configuring OpenShift Container Platform clusters in the OpenShift Container Platform documentation. The following diagram shows the components that are installed with the multicluster engine for Kubernetes operator for cluster management: The components of the cluster lifecycle management architecture include the following items: 1.7.1.1. Hub cluster The managed cluster import controller deploys the klusterlet operator to the managed clusters. The Hive controller provisions the clusters that you create by using the multicluster engine for Kubernetes operator. The Hive Controller also destroys managed clusters that were created by the multicluster engine for Kubernetes operator. 
The cluster curator controller creates the Ansible jobs as the pre-hook or post-hook to configure the cluster infrastructure environment when creating or upgrading managed clusters. When a managed cluster add-on is enabled on the hub cluster, its add-on hub controller is deployed on the hub cluster. The add-on hub controller deploys the add-on agent to the managed clusters. 1.7.1.2. Managed cluster The klusterlet operator deploys the registration and work controllers on the managed cluster. The Registration Agent registers the managed cluster and the managed cluster add-ons with the hub cluster. The Registration Agent also maintains the status of the managed cluster and the managed cluster add-ons. The following permissions are automatically created within the Clusterrole to allow the managed cluster to access the hub cluster: Allows the agent to get or update its owned cluster that the hub cluster manages Allows the agent to update the status of its owned cluster that the hub cluster manages Allows the agent to rotate its certificate Allows the agent to get or update the coordination.k8s.io lease Allows the agent to get its managed cluster add-ons Allows the agent to update the status of its managed cluster add-ons The work agent applies the Add-on Agent to the managed cluster. The permission to allow the managed cluster to access the hub cluster is automatically created within the Clusterrole and allows the agent to send events to the hub cluster. To continue adding and managing clusters, see the Cluster lifecycle introduction . 1.7.2. Release images When you build your cluster, use the version of Red Hat OpenShift Container Platform that the release image specifies. By default, OpenShift Container Platform uses the clusterImageSets resources to get the list of supported release images. Continue reading to learn more about release images: Specifying release images Maintaining a custom list of release images while connected Maintaining a custom list of release images while disconnected Synchronizing available release images 1.7.2.1. Specifying release images When you create a cluster on a provider by using multicluster engine for Kubernetes operator, specify a release image to use for your new cluster. To specify a release image, see the following topics: Locating ClusterImageSets Configuring ClusterImageSets Creating a release image to deploy a cluster on a different architecture 1.7.2.1.1. Locating ClusterImageSets The YAML files referencing the release images are maintained in the acm-hive-openshift-releases GitHub repository. The files are used to create the list of the available release images in the console. This includes the latest fast channel images from OpenShift Container Platform. The console only displays the latest release images for the three latest versions of OpenShift Container Platform. For example, you might see the following release image displayed in the console options: quay.io/openshift-release-dev/ocp-release:4.15.1-x86_64 The console displays the latest versions to help you create a cluster with the latest release images. If you need to create a cluster that is a specific version, older release image versions are also available. Note: You can only select images with the visible: 'true' label when creating clusters in the console. An example of this label in a ClusterImageSet resource is provided in the following content. 
Replace 4.x.1 with the current version of the product: apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: labels: channel: fast visible: 'true' name: img4.x.1-x86-64-appsub spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.x.1-x86_64 Additional release images are stored, but are not visible in the console. To view all of the available release images, run the following command: The repository has the clusterImageSets directory, which is the directory that you use when working with the release images. The clusterImageSets directory has the following directories: Fast: Contains files that reference the latest versions of the release images for each supported OpenShift Container Platform version. The release images in this folder are tested, verified, and supported. Releases: Contains files that reference all of the release images for each OpenShift Container Platform version (stable, fast, and candidate channels) Note: These releases have not all been tested and determined to be stable. Stable: Contains files that reference the latest two stable versions of the release images for each supported OpenShift Container Platform version. Note: By default, the current list of release images updates one time every hour. After upgrading the product, it might take up to one hour for the list to reflect the recommended release image versions for the new version of the product. 1.7.2.1.2. Configuring ClusterImageSets You can configure your ClusterImageSets with the following options: Option 1: To create a cluster in the console, specify the image reference for the specific ClusterImageSet that you want to use. Each new entry you specify persists and is available for all future cluster provisions. See the following example entry: Option 2: Manually create and apply a ClusterImageSets YAML file from the acm-hive-openshift-releases GitHub repository. Option 3: To enable automatic updates of ClusterImageSets from a forked GitHub repository, follow the README.md in the cluster-image-set-controller GitHub repository. 1.7.2.1.3. Creating a release image to deploy a cluster on a different architecture You can create a cluster on an architecture that is different from the architecture of the hub cluster by manually creating a release image that has the files for both architectures. For example, you might need to create an x86_64 cluster from a hub cluster that is running on the ppc64le , aarch64 , or s390x architecture. If you create the release image with both sets of files, the cluster creation succeeds because the new release image enables the OpenShift Container Platform release registry to provide a multi-architecture image manifest. OpenShift Container Platform supports multiple architectures by default. You can use the following clusterImageSet to provision a cluster. Replace 4.x.0 with the current supported version: apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: labels: channel: fast visible: 'true' name: img4.x.0-multi-appsub spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.x.0-multi To create the release image for OpenShift Container Platform images that do not support multiple architectures, complete steps similar to the following example for your architecture type: From the OpenShift Container Platform release registry , create a manifest list that includes x86_64 , s390x , aarch64 , and ppc64le release images.
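The steps that follow refer to example pull, login, push, and manifest commands. A possible end-to-end sequence with podman is shown here for two of the architectures; the private registry path, the 4.x.1 version, and the manifest tag are placeholders and assumptions:

# Pull the per-architecture release images (repeat for each architecture that you need)
podman pull quay.io/openshift-release-dev/ocp-release:4.x.1-x86_64
podman pull quay.io/openshift-release-dev/ocp-release:4.x.1-ppc64le

# Log in to the private registry, then tag and push the per-architecture images
podman login <private-repo>
podman tag quay.io/openshift-release-dev/ocp-release:4.x.1-x86_64 <private-repo>/ocp-release:4.x.1-x86_64
podman tag quay.io/openshift-release-dev/ocp-release:4.x.1-ppc64le <private-repo>/ocp-release:4.x.1-ppc64le
podman push <private-repo>/ocp-release:4.x.1-x86_64
podman push <private-repo>/ocp-release:4.x.1-ppc64le

# Create a manifest list, add both architecture images, and push the merged manifest
podman manifest create <private-repo>/ocp-release:4.x.1
podman manifest add <private-repo>/ocp-release:4.x.1 <private-repo>/ocp-release:4.x.1-x86_64
podman manifest add <private-repo>/ocp-release:4.x.1 <private-repo>/ocp-release:4.x.1-ppc64le
podman manifest push --all <private-repo>/ocp-release:4.x.1 docker://<private-repo>/ocp-release:4.x.1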
Pull the manifest lists for both architectures in your environment from the Quay repository by running the following example commands. Replace 4.x.1 with the current version of the product: Log in to your private repository where you maintain your images by running the following command. Replace <private-repo> with the path to your repository: Add the release image manifest to your private repository by running the following commands that apply to your environment. Replace 4.x.1 with the current version of the product. Replace <private-repo> with the path to your repository: Create a manifest for the new information by running the following command: Add references to both release images to the manifest list by running the following commands. Replace 4.x.1 with the current version of the product. Replace <private-repo> with the path to your repository: Merge the list in your manifest list with the existing manifest by running the following command. Replace <private-repo> with the path to your repository. Replace 4.x.1 with the current version: On the hub cluster, create a release image that references the manifest in your repository. Create a YAML file that contains information that is similar to the following example. Replace <private-repo> with the path to your repository. Replace 4.x.1 with the current version:

apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  labels:
    channel: fast
    visible: "true"
  name: img4.x.1-appsub
spec:
  releaseImage: <private-repo>/ocp-release:4.x.1

Run the following command on your hub cluster to apply the changes. Replace <file-name> with the name of the YAML file that you created in the previous step: Select the new release image when you create your OpenShift Container Platform cluster. If you deploy the managed cluster by using the Red Hat Advanced Cluster Management console, specify the architecture for the managed cluster in the Architecture field during the cluster creation process. The creation process uses the merged release images to create the cluster. 1.7.2.1.4. Additional resources See the acm-hive-openshift-releases GitHub repository for the YAML files that reference the release images. See the cluster-image-set-controller GitHub repository to learn how to enable automatic updates of ClusterImageSets resources from a forked GitHub repository. 1.7.2.2. Maintaining a custom list of release images while connected You might want to use the same release image for all of your clusters. To simplify, you can create your own custom list of release images that are available when creating a cluster. Complete the following steps to manage your available release images: Fork the acm-hive-openshift-releases GitHub repository. Add the YAML files for the images that you want available when you create a cluster. Add the images to the ./clusterImageSets/stable/ or ./clusterImageSets/fast/ directory by using the Git console or the terminal. Create a ConfigMap in the multicluster-engine namespace named cluster-image-set-git-repo . See the following example, but replace 2.x with 2.7:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-image-set-git-repo
  namespace: multicluster-engine
data:
  gitRepoUrl: <forked acm-hive-openshift-releases repository URL>
  gitRepoBranch: backplane-<2.x>
  gitRepoPath: clusterImageSets
  channel: <fast or stable>

You can retrieve the available YAML files from the main repository by merging changes into your forked repository with the following procedure: Commit and merge your changes to your forked repository.
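For example, after you save the ConfigMap from the previous example to a file, you can apply it and then check that the ClusterImageSet resources are synchronized. The following is a minimal sketch; the file name is illustrative, and the output depends on the contents of your forked repository:

# Apply the cluster-image-set-git-repo ConfigMap (file name is an assumption for this example).
oc apply -f cluster-image-set-git-repo.yaml

# List the ClusterImageSet resources that the controller synchronizes from your fork.
oc get clusterimagesets

After the controller synchronizes with your forked repository, the release images that you added are listed in the output.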
To synchronize your list of fast release images after you clone the acm-hive-openshift-releases repository, update the value of the channel field in the cluster-image-set-git-repo ConfigMap to fast . To synchronize and display the stable release images, update the value of the channel field in the cluster-image-set-git-repo ConfigMap to stable . After updating the ConfigMap , the list of available stable release images updates with the currently available images in about one minute. You can use the following commands to list what is available and remove the defaults. Replace <clusterImageSet_NAME> with the correct name: View the list of currently available release images in the console when you are creating a cluster. For information regarding other fields available through the ConfigMap , view the cluster-image-set-controller GitHub repository README . 1.7.2.3. Maintaining a custom list of release images while disconnected In some cases, you need to maintain a custom list of release images when the hub cluster has no Internet connection. You can create your own custom list of release images that are available when creating a cluster. Complete the following steps to manage your available release images while disconnected: When you are on a connected system, go to the acm-hive-openshift-releases GitHub repository to access the available cluster image sets. Copy the clusterImageSets directory to a system that can access the disconnected multicluster engine operator cluster. Add the mapping between the managed cluster and the disconnected repository with your cluster image sets by completing the following step that fits your managed cluster: For an OpenShift Container Platform managed cluster, see Configuring image registry repository mirroring for information about using your ImageContentSourcePolicy object to complete the mapping. For a managed cluster that is not an OpenShift Container Platform cluster, use the ManagedClusterImageRegistry custom resource definition to override the location of the image sets. See Specifying registry images on managed clusters for import for information about how to override the cluster for the mapping. Add the YAML files for the images that you want available when you create a cluster by using the console or CLI to manually add the clusterImageSet YAML content. Modify the clusterImageSet YAML files for the remaining OpenShift Container Platform release images to reference the correct offline repository where you store the images. Your updates resemble the following example, where spec.releaseImage refers to your offline image registry and the release image is referenced by digest:

apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  labels:
    channel: fast
  name: img<4.x.x>-x86-64-appsub
spec:
  releaseImage: IMAGE_REGISTRY_IPADDRESS_or_DNSNAME/REPO_PATH/ocp-release@sha256:073a4e46289be25e2a05f5264c8f1d697410db66b960c9ceeddebd1c61e58717

Ensure that the images are loaded in the offline image registry that is referenced in the YAML file. Obtain the image digest by running the following command: oc adm release info <tagged_openshift_release_image> | grep "Pull From" Replace <tagged_openshift_release_image> with the tagged image for the supported OpenShift Container Platform version. See the following example output: Pull From: quay.io/openshift-release-dev/ocp-release@sha256:69d1292f64a2b67227c5592c1a7d499c7d00376e498634ff8e1946bc9ccdddfe To learn more about the image tag and digest, see Referencing images in imagestreams .
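The following sketch shows how the digest that the previous command returns might be combined with an offline registry path in the releaseImage field. The tag, registry host, and repository path shown here are placeholder values for illustration only, and the digest varies by release:

# Look up the digest for a tagged release image (the tag is an example).
oc adm release info quay.io/openshift-release-dev/ocp-release:4.15.1-x86_64 | grep "Pull From"
# Example output (digest varies by release):
# Pull From: quay.io/openshift-release-dev/ocp-release@sha256:69d1292f64a2b67227c5592c1a7d499c7d00376e498634ff8e1946bc9ccdddfe

# Reference the same digest from your offline registry in the ClusterImageSet, for example:
#   releaseImage: mirror.example.com:5000/ocp4/ocp-release@sha256:69d1292f64a2b67227c5592c1a7d499c7d00376e498634ff8e1946bc9ccdddfe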
Create each of the clusterImageSets by entering the following command for each YAML file: Replace clusterImageSet_FILE with the name of the cluster image set file. For example: After running this command for each resource you want to add, the list of available release images is available. Alternatively, you can paste the image URL directly in the create cluster console. Adding the image URL creates new clusterImageSets if they do not exist. View the list of currently available release images in the console when you are creating a cluster. 1.7.2.4. Synchronizing available release images If you have the Red Hat Advanced Cluster Management hub cluster, which uses the MultiClusterHub operator to manage, upgrade, and install hub cluster components, you can synchronize the list of release images to ensure that you can select the latest available versions. Release images are available in the acm-hive-openshift-releases repository and are updated frequently. 1.7.2.4.1. Stability levels There are three levels of stability of the release images, as displayed in the following table: Table 1.4. Stability levels of release images

Category     Description
candidate    The most current images, which are not tested and might have some bugs.
fast         Images that are partially tested, but likely less stable than a stable version.
stable       These fully-tested images are confirmed to install and build clusters correctly.

1.7.2.4.2. Refreshing the release images list Complete the following steps to refresh and synchronize the list of images by using a Linux or Mac operating system: If the installer-managed acm-hive-openshift-releases subscription is enabled, disable the subscription by setting the value of disableUpdateClusterImageSets to true in the MultiClusterHub resource. Clone the acm-hive-openshift-releases GitHub repository. Remove the subscription by running the following command: oc delete -f subscribe/subscription-fast To synchronize and display the candidate release images, run the following command by using a Linux or Mac operating system: make subscribe-candidate After about one minute, the latest list of candidate release images is available. To synchronize and display the fast release images, run the following command: make subscribe-fast After about one minute, the latest list of fast release images is available. Connect to the stable release images and synchronize your Red Hat Advanced Cluster Management hub cluster. Run the following command using a Linux or Mac operating system: make subscribe-stable After about one minute, the list of available candidate , fast , and stable release images updates with the currently available images. View the list of currently available release images in the Red Hat Advanced Cluster Management console when you are creating a cluster. Unsubscribe from any of these channels to stop viewing the updates by running the following command: oc delete -f subscribe/subscription-fast 1.7.3. Creating clusters Learn how to create Red Hat OpenShift Container Platform clusters across cloud providers with multicluster engine operator. multicluster engine operator uses the Hive operator that is provided with OpenShift Container Platform to provision clusters for all providers except the on-premises clusters and hosted control planes. When provisioning the on-premises clusters, multicluster engine operator uses the central infrastructure management and Assisted Installer function that are provided with OpenShift Container Platform.
The hosted clusters for hosted control planes are provisioned by using the HyperShift operator. Configuring additional manifests during cluster creation Creating a cluster on Amazon Web Services Creating a cluster on Amazon Web Services GovCloud Creating a cluster on Microsoft Azure Creating a cluster on Google Cloud Platform Creating a cluster on VMware vSphere Creating a cluster on Red Hat OpenStack Platform Creating a cluster in an on-premises environment Creating a cluster in a proxy environment Configuring AgentClusterInstall proxy 1.7.3.1. Creating a cluster with the CLI The multicluster engine for Kubernetes operator uses internal Hive components to create Red Hat OpenShift Container Platform clusters. See the following information to learn how to create clusters. Prerequisites Create a cluster with ClusterDeployment Create a cluster with cluster pool 1.7.3.1.1. Prerequisites Before you create a cluster, you must clone the clusterImageSets repository and apply it to your hub cluster. Complete the following steps: Run the following command to clone, but replace 2.x with your version of multicluster engine operator: Run the following command to apply it to your hub cluster: Select the OpenShift Container Platform release images when you create a cluster. Note: If you use the Nutanix platform, be sure to use x86_64 architecture for the releaseImage in the ClusterImageSet resource and set the visible label value to 'true' . See the following example:

apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  labels:
    channel: stable
    visible: 'true'
  name: img4.x.47-x86-64-appsub
spec:
  releaseImage: quay.io/openshift-release-dev/ocp-release:4.x.47-x86_64

Review the hub cluster KubeAPIServer certificate verification strategy to make sure that the default UseAutoDetectedCABundle strategy works. If you need to manually change the strategy, see Configuring the hub cluster KubeAPIServer verification strategy . 1.7.3.1.2. Create a cluster with ClusterDeployment A ClusterDeployment is a Hive custom resource that is used to control the lifecycle of a cluster. Follow the Using Hive documentation to create the ClusterDeployment custom resource and create an individual cluster. 1.7.3.1.3. Create a cluster with ClusterPool A ClusterPool is also a Hive custom resource that is used to create multiple clusters. Follow the Cluster Pools documentation to create a cluster with the Hive ClusterPool API. 1.7.3.2. Configuring additional manifests during cluster creation You can configure additional Kubernetes resource manifests during the installation process of creating your cluster. This can help if you need to configure additional manifests for scenarios such as configuring networking or setting up a load balancer. 1.7.3.2.1. Prerequisites Add a reference to the ClusterDeployment resource that specifies a config map resource that contains the additional resource manifests. Note: The ClusterDeployment resource and the config map must be in the same namespace. 1.7.3.2.2.
Configuring additional manifests during cluster creation by using examples If you want to configure additional manifests by using a config map with resource manifests, complete the following steps: Create a YAML file and add the following example content:

kind: ConfigMap
apiVersion: v1
metadata:
  name: <my-baremetal-cluster-install-manifests>
  namespace: <mynamespace>
data:
  99_metal3-config.yaml: |
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: metal3-config
      namespace: openshift-machine-api
    data:
      http_port: "6180"
      provisioning_interface: "enp1s0"
      provisioning_ip: "172.00.0.3/24"
      dhcp_range: "172.00.0.10,172.00.0.100"
      deploy_kernel_url: "http://172.00.0.3:6180/images/ironic-python-agent.kernel"
      deploy_ramdisk_url: "http://172.00.0.3:6180/images/ironic-python-agent.initramfs"
      ironic_endpoint: "http://172.00.0.3:6385/v1/"
      ironic_inspector_endpoint: "http://172.00.0.3:5150/v1/"
      cache_url: "http://192.168.111.1/images"
      rhcos_image_url: "https://releases-art-rhcos.svc.ci.openshift.org/art/storage/releases/rhcos-4.3/43.81.201911192044.0/x86_64/rhcos-43.81.201911192044.0-openstack.x86_64.qcow2.gz"

Note: The example ConfigMap contains a manifest with another ConfigMap resource. The resource manifest ConfigMap can contain multiple keys with resource configurations added in the following pattern, data.<resource_name>\.yaml . Apply the file by running the following command: oc apply -f <filename>.yaml If you want to configure additional manifests by using a ClusterDeployment that references a resource manifest ConfigMap , complete the following steps: Create a YAML file and add the following example content. The resource manifest ConfigMap is referenced in spec.provisioning.manifestsConfigMapRef :

apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: <my-baremetal-cluster>
  namespace: <mynamespace>
  annotations:
    hive.openshift.io/try-install-once: "true"
spec:
  baseDomain: test.example.com
  clusterName: <my-baremetal-cluster>
  controlPlaneConfig:
    servingCertificates: {}
  platform:
    baremetal:
      libvirtSSHPrivateKeySecretRef:
        name: provisioning-host-ssh-private-key
  provisioning:
    installConfigSecretRef:
      name: <my-baremetal-cluster-install-config>
    sshPrivateKeySecretRef:
      name: <my-baremetal-hosts-ssh-private-key>
    manifestsConfigMapRef:
      name: <my-baremetal-cluster-install-manifests>
    imageSetRef:
      name: <my-clusterimageset>
    sshKnownHosts:
    - "10.1.8.90 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXvVVVKUYVkuyvkuygkuyTCYTytfkufTYAAAAIbmlzdHAyNTYAAABBBKWjJRzeUVuZs4yxSy4eu45xiANFIIbwE3e1aPzGD58x/NX7Yf+S8eFKq4RrsfSaK2hVJyJjvVIhUsU9z2sBJP8="
  pullSecretRef:
    name: <my-baremetal-cluster-pull-secret>

Apply the file by running the following command: oc apply -f <filename>.yaml 1.7.3.3. Creating a cluster on Amazon Web Services You can use the multicluster engine operator console to create a Red Hat OpenShift Container Platform cluster on Amazon Web Services (AWS). When you create a cluster, the creation process uses the OpenShift Container Platform installer with the Hive resource. If you have questions about cluster creation after completing this procedure, see Installing on AWS in the OpenShift Container Platform documentation for more information about the process. Prerequisites Creating your AWS cluster Creating your cluster with the console 1.7.3.3.1. Prerequisites See the following prerequisites before creating a cluster on AWS: You must have a deployed hub cluster. You need an AWS credential. See Creating a credential for Amazon Web Services for more information. You need a configured domain in AWS.
See Configuring an AWS account for instructions on how to configure a domain. You must have Amazon Web Services (AWS) login credentials, which include user name, password, access key ID, and secret access key. See Understanding and Getting Your Security Credentials . You must have an OpenShift Container Platform image pull secret. See Using image pull secrets . Note: If you change your cloud provider access key on the cloud provider, you also need to manually update the corresponding credential for the cloud provider on the console. This is required when your credentials expire on the cloud provider where the managed cluster is hosted and you try to delete the managed cluster. 1.7.3.3.2. Creating your AWS cluster See the following important information about creating an AWS cluster: When you review your information and optionally customize it before creating the cluster, you can select YAML: On to view the install-config.yaml file content in the panel. You can edit the YAML file with your custom settings, if you have any updates. When you create a cluster, the controller creates a namespace for the cluster and the resources. Ensure that you include only resources for that cluster instance in that namespace. Destroying the cluster deletes the namespace and all of the resources in it. If you want to add your cluster to an existing cluster set, you must have the correct permissions on the cluster set to add it. If you do not have cluster-admin privileges when you are creating the cluster, you must select a cluster set on which you have clusterset-admin permissions. If you do not have the correct permissions on the specified cluster set, the cluster creation fails. Contact your cluster administrator to provide you with clusterset-admin permissions to a cluster set if you do not have any cluster set options to select. Every managed cluster must be associated with a managed cluster set. If you do not assign the managed cluster to a ManagedClusterSet , it is automatically added to the default managed cluster set. If there is already a base DNS domain that is associated with the selected credential that you configured with your AWS account, that value is populated in the field. You can change the value by overwriting it. This name is used in the hostname of the cluster. The release image identifies the version of the OpenShift Container Platform image that is used to create the cluster. Select the image from the list of images that are available. If the image that you want to use is not available, you can enter the URL to the image that you want to use. The node pools include the control plane pool and the worker pools. The control plane nodes share the management of the cluster activity. The information includes the following fields: Region: Specify the region where you want the node pool. CPU architecture: If the architecture type of the managed cluster is not the same as the architecture of your hub cluster, enter a value for the instruction set architecture of the machines in the pool. Valid values are amd64 , ppc64le , s390x , and arm64 . Zones: Specify where you want to run your control plane pools. You can select multiple zones within the region for a more distributed group of control plane nodes. A closer zone might provide faster performance, but a more distant zone might be more distributed. Instance type: Specify the instance type for your control plane node. You can change the type and size of your instance after it is created. 
Root storage: Specify the amount of root storage to allocate for the cluster. You can create zero or more worker nodes in a worker pool to run the container workloads for the cluster. This can be in a single worker pool, or distributed across multiple worker pools. If zero worker nodes are specified, the control plane nodes also function as worker nodes. The optional information includes the following fields: Zones: Specify where you want to run your worker pools. You can select multiple zones within the region for a more distributed group of nodes. A closer zone might provide faster performance, but a more distant zone might be more distributed. Instance type: Specify the instance type of your worker pools. You can change the type and size of your instance after it is created. Node count: Specify the node count of your worker pool. This setting is required when you define a worker pool. Root storage: Specify the amount of root storage allocated for your worker pool. This setting is required when you define a worker pool. Networking details are required for your cluster, and multiple networks are required for using IPv6. You can add an additional network by clicking Add network . Proxy information that is provided in the credential is automatically added to the proxy fields. You can use the information as it is, overwrite it, or add the information if you want to enable a proxy. The following list contains the required information for creating a proxy: HTTP proxy: Specify the URL that should be used as a proxy for HTTP traffic. HTTPS proxy: Specify the secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS . No proxy sites: A comma-separated list of sites that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations. Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections. 1.7.3.3.3. Creating your cluster with the console To create a new cluster, see the following procedure. If you have an existing cluster that you want to import instead, see Cluster import . Note: You do not have to run the oc command that is provided with the cluster details to import the cluster. When you create the cluster, it is automatically configured under the management of multicluster engine operator. Navigate to Infrastructure > Clusters . On the Clusters page, click Cluster > Create cluster and complete the steps in the console. Optional: Select YAML: On to view content updates as you enter the information in the console. If you need to create a credential, see Creating a credential for Amazon Web Services for more information. The name of the cluster is used in the hostname of the cluster. If you are using Red Hat Advanced Cluster Management for Kubernetes and want to configure your managed cluster klusterlet to run on specific nodes, see Optional: Configuring the klusterlet to run on specific nodes for the required steps. 1.7.3.3.4. Additional resources The AWS private configuration information is used when you are creating an AWS GovCloud cluster. See Creating a cluster on Amazon Web Services GovCloud for information about creating a cluster in that environment. See Configuring an AWS account for more information. See Release images for more information about release images.
Find more information about supported instance types by visiting your cloud provider sites, such as AWS General purpose instances . 1.7.3.4. Creating a cluster on Amazon Web Services GovCloud You can use the console to create a Red Hat OpenShift Container Platform cluster on Amazon Web Services (AWS) or on AWS GovCloud. This procedure explains how to create a cluster on AWS GovCloud. See Creating a cluster on Amazon Web Services for the instructions for creating a cluster on AWS. AWS GovCloud provides cloud services that meet additional requirements that are necessary to store government documents on the cloud. When you create a cluster on AWS GovCloud, you must complete additional steps to prepare your environment. When you create a cluster, the creation process uses the OpenShift Container Platform installer with the Hive resource. If you have questions about cluster creation after completing this procedure, see Installing a cluster on AWS into a government region in the OpenShift Container Platform documentation for more information about the process. The following sections provide the steps for creating a cluster on AWS GovCloud: Prerequisites Configure Hive to deploy on AWS GovCloud Creating your cluster with the console 1.7.3.4.1. Prerequisites You must have the following prerequisites before creating an AWS GovCloud cluster: You must have AWS login credentials, which include user name, password, access key ID, and secret access key. See Understanding and Getting Your Security Credentials . You need an AWS credential. See Creating a credential for Amazon Web Services for more information. You need a configured domain in AWS. See Configuring an AWS account for instructions on how to configure a domain. You must have an OpenShift Container Platform image pull secret. See Using image pull secrets . You must have an Amazon Virtual Private Cloud (VPC) with an existing Red Hat OpenShift Container Platform cluster for the hub cluster. This VPC must be different from the VPCs that are used for the managed cluster resources or the managed cluster service endpoints. You need a VPC where the managed cluster resources are deployed. This cannot be the same as the VPCs that are used for the hub cluster or the managed cluster service endpoints. You need one or more VPCs that provide the managed cluster service endpoints. This cannot be the same as the VPCs that are used for the hub cluster or the managed cluster resources. Ensure that the IP addresses of the VPCs that are specified by Classless Inter-Domain Routing (CIDR) do not overlap. You need a HiveConfig custom resource that references a credential within the Hive namespace. This custom resource must have access to create resources on the VPC that you created for the managed cluster service endpoints. Note: If you change your cloud provider access key on the cloud provider, you also need to manually update the corresponding credential for the cloud provider on the multicluster engine operator console. This is required when your credentials expire on the cloud provider where the managed cluster is hosted and you try to delete the managed cluster. 1.7.3.4.2. Configure Hive to deploy on AWS GovCloud While creating a cluster on AWS GovCloud is almost identical to creating a cluster on standard AWS, you have to complete some additional steps to prepare an AWS PrivateLink for the cluster on AWS GovCloud. 1.7.3.4.2.1.
Create the VPCs for resources and endpoints As listed in the prerequisites, two VPCs are required in addition to the VPC that contains the hub cluster. See Create a VPC in the Amazon Web Services documentation for specific steps for creating a VPC. Create a VPC for the managed cluster with private subnets. Create one or more VPCs for the managed cluster service endpoints with private subnets. Each VPC in a region has a limit of 255 VPC endpoints, so you need multiple VPCs to support more than 255 clusters in that region. For each VPC, create subnets in all of the supported availability zones of the region. Each subnet must have at least 255 usable IP addresses because of the controller requirements. The following example shows how you might structure subnets for VPCs that have 6 availability zones in the us-gov-east-1 region: Ensure that all of the hub environments (hub cluster VPCs) have network connectivity to the VPCs that you created for the VPC endpoints by using peering or transit gateways, and that all DNS settings are enabled. Collect a list of VPCs that are needed to resolve the DNS setup for the AWS PrivateLink, which is required for the AWS GovCloud connectivity. This includes at least the VPC of the multicluster engine operator instance that you are configuring, and can include the list of all of the VPCs where various Hive controllers exist. 1.7.3.4.2.2. Configure the security groups for the VPC endpoints Each VPC endpoint in AWS has a security group attached to control access to the endpoint. When Hive creates a VPC endpoint, it does not specify a security group. The default security group of the VPC is attached to the VPC endpoint. The default security group of the VPC where the VPC endpoints are created must have rules that allow traffic from the Hive installer pods. See Control access to VPC endpoints using endpoint policies in the AWS documentation for details. For example, if Hive is running in hive-vpc(10.1.0.0/16) , there must be a rule in the default security group of the VPC where the VPC endpoint is created that allows ingress from 10.1.0.0/16 . 1.7.3.4.2.3. Set permissions for AWS PrivateLink You need multiple credentials to configure the AWS PrivateLink. The required permissions for these credentials depend on the type of credential. The credentials for ClusterDeployment require the following permissions: The credentials for HiveConfig for endpoint VPCs account .spec.awsPrivateLink.credentialsSecretRef require the following permissions: The credentials specified in the HiveConfig custom resource for associating VPCs to the private hosted zone ( .spec.awsPrivateLink.associatedVPCs[$idx].credentialsSecretRef ). The account where the VPC is located requires the following permissions: Ensure that there is a credential secret within the Hive namespace on the hub cluster. The HiveConfig custom resource needs to reference a credential within the Hive namespace that has permissions to create resources in a specific provided VPC. If the credential that you are using to provision an AWS cluster in AWS GovCloud is already in the Hive namespace, then you do not need to create another one. If the credential that you are using to provision an AWS cluster in AWS GovCloud is not already in the Hive namespace, you can either replace your current credential or create an additional credential in the Hive namespace. The HiveConfig custom resource needs to include the following content: An AWS GovCloud credential that has the required permissions to provision resources for the given VPC.
The addresses of the VPCs for the OpenShift Container Platform cluster installation, as well as the service endpoints for the managed cluster. Best practice: Use different VPCs for the OpenShift Container Platform cluster installation and the service endpoints. The following example shows the credential content:

spec:
  awsPrivateLink:
    ## The list of inventory of VPCs that can be used to create VPC
    ## endpoints by the controller.
    endpointVPCInventory:
    - region: us-east-1
      vpcID: vpc-1
      subnets:
      - availabilityZone: us-east-1a
        subnetID: subnet-11
      - availabilityZone: us-east-1b
        subnetID: subnet-12
      - availabilityZone: us-east-1c
        subnetID: subnet-13
      - availabilityZone: us-east-1d
        subnetID: subnet-14
      - availabilityZone: us-east-1e
        subnetID: subnet-15
      - availabilityZone: us-east-1f
        subnetID: subnet-16
    - region: us-east-1
      vpcID: vpc-2
      subnets:
      - availabilityZone: us-east-1a
        subnetID: subnet-21
      - availabilityZone: us-east-1b
        subnetID: subnet-22
      - availabilityZone: us-east-1c
        subnetID: subnet-23
      - availabilityZone: us-east-1d
        subnetID: subnet-24
      - availabilityZone: us-east-1e
        subnetID: subnet-25
      - availabilityZone: us-east-1f
        subnetID: subnet-26
    ## The credentialsSecretRef references a secret with permissions to create
    ## the resources in the account where the inventory of VPCs exist.
    credentialsSecretRef:
      name: <hub-account-credentials-secret-name>
    ## A list of VPCs where various mce clusters exist.
    associatedVPCs:
    - region: region-mce1
      vpcID: vpc-mce1
      credentialsSecretRef:
        name: <credentials-that-have-access-to-account-where-MCE1-VPC-exists>
    - region: region-mce2
      vpcID: vpc-mce2
      credentialsSecretRef:
        name: <credentials-that-have-access-to-account-where-MCE2-VPC-exists>

You can include a VPC from all the regions where AWS PrivateLink is supported in the endpointVPCInventory list. The controller selects a VPC that meets the requirements for the ClusterDeployment. For more information, refer to the Hive documentation . 1.7.3.4.3. Creating your cluster with the console To create a cluster from the console, navigate to Infrastructure > Clusters > Create cluster > AWS > Standalone and complete the steps in the console. Note: This procedure is for creating a cluster. If you have an existing cluster that you want to import, see Cluster import for those steps. The credential that you select must have access to the resources in an AWS GovCloud region, if you create an AWS GovCloud cluster. You can use an AWS GovCloud secret that is already in the Hive namespace if it has the required permissions to deploy a cluster. Existing credentials are displayed in the console. If you need to create a credential, see Creating a credential for Amazon Web Services for more information. The name of the cluster is used in the hostname of the cluster. Important: When you create a cluster, the controller creates a namespace for the cluster and its resources. Ensure that you include only resources for that cluster instance in that namespace. Destroying the cluster deletes the namespace and all of the resources in it. Tip: Select YAML: On to view content updates as you enter the information in the console. If you want to add your cluster to an existing cluster set, you must have the correct permissions on the cluster set to add it. If you do not have cluster-admin privileges when you are creating the cluster, you must select a cluster set on which you have clusterset-admin permissions. If you do not have the correct permissions on the specified cluster set, the cluster creation fails.
Contact your cluster administrator to provide you with clusterset-admin permissions to a cluster set if you do not have any cluster set options to select. Every managed cluster must be associated with a managed cluster set. If you do not assign the managed cluster to a ManagedClusterSet , it is automatically added to the default managed cluster set. If there is already a base DNS domain that is associated with the selected credential that you configured with your AWS or AWS GovCloud account, that value is populated in the field. You can change the value by overwriting it. This name is used in the hostname of the cluster. See Configuring an AWS account for more information. The release image identifies the version of the OpenShift Container Platform image that is used to create the cluster. If the version that you want to use is available, you can select the image from the list of images. If the image that you want to use is not a standard image, you can enter the URL to the image that you want to use. See Release images for more information about release images. The node pools include the control plane pool and the worker pools. The control plane nodes share the management of the cluster activity. The information includes the following fields: Region: The region where you create your cluster resources. If you are creating a cluster on an AWS GovCloud provider, you must include an AWS GovCloud region for your node pools. For example, us-gov-west-1 . CPU architecture: If the architecture type of the managed cluster is not the same as the architecture of your hub cluster, enter a value for the instruction set architecture of the machines in the pool. Valid values are amd64 , ppc64le , s390x , and arm64 . Zones: Specify where you want to run your control plane pools. You can select multiple zones within the region for a more distributed group of control plane nodes. A closer zone might provide faster performance, but a more distant zone might be more distributed. Instance type: Specify the instance type for your control plane node, which must be the same as the CPU architecture that you previously indicated. You can change the type and size of your instance after it is created. Root storage: Specify the amount of root storage to allocate for the cluster. You can create zero or more worker nodes in a worker pool to run the container workloads for the cluster. They can be in a single worker pool, or distributed across multiple worker pools. If zero worker nodes are specified, the control plane nodes also function as worker nodes. The optional information includes the following fields: Pool name: Provide a unique name for your pool. Zones: Specify where you want to run your worker pools. You can select multiple zones within the region for a more distributed group of nodes. A closer zone might provide faster performance, but a more distant zone might be more distributed. Instance type: Specify the instance type of your worker pools. You can change the type and size of your instance after it is created. Node count: Specify the node count of your worker pool. This setting is required when you define a worker pool. Root storage: Specify the amount of root storage allocated for your worker pool. This setting is required when you define a worker pool. Networking details are required for your cluster, and multiple networks are required for using IPv6. For an AWS GovCloud cluster, enter the values of the block of addresses of the Hive VPC in the Machine CIDR field. 
You can add an additional network by clicking Add network . Proxy information that is provided in the credential is automatically added to the proxy fields. You can use the information as it is, overwrite it, or add the information if you want to enable a proxy. The following list contains the required information for creating a proxy: HTTP proxy URL: Specify the URL that should be used as a proxy for HTTP traffic. HTTPS proxy URL: Specify the secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS . No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations. Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections. When creating an AWS GovCloud cluster or using a private environment, complete the fields on the AWS private configuration page with the AMI ID and the subnet values. Ensure that the value of spec:platform:aws:privateLink:enabled is set to true in the ClusterDeployment.yaml file, which is automatically set when you select Use private configuration . When you review your information and optionally customize it before creating the cluster, you can select YAML: On to view the install-config.yaml file content in the panel. You can edit the YAML file with your custom settings, if you have any updates. Note: You do not have to run the oc command that is provided with the cluster details to import the cluster. When you create the cluster, it is automatically configured under the management of multicluster engine for Kubernetes operator. If you are using Red Hat Advanced Cluster Management for Kubernetes and want to configure your managed cluster klusterlet to run on specific nodes, see Optional: Configuring the klusterlet to run on specific nodes for the required steps. Continue with Accessing your cluster for instructions for accessing your cluster. 1.7.3.5. Creating a cluster on Microsoft Azure You can use the multicluster engine operator console to deploy a Red Hat OpenShift Container Platform cluster on Microsoft Azure or on Microsoft Azure Government. When you create a cluster, the creation process uses the OpenShift Container Platform installer with the Hive resource. If you have questions about cluster creation after completing this procedure, see Installing on Azure in the OpenShift Container Platform documentation for more information about the process. Prerequisites Creating your cluster with the console 1.7.3.5.1. Prerequisites See the following prerequisites before creating a cluster on Azure: You must have a deployed hub cluster. You need an Azure credential. See Creating a credential for Microsoft Azure for more information. You need a configured domain in Azure or Azure Government. See Configuring a custom domain name for an Azure cloud service for instructions on how to configure a domain. You need Azure login credentials, which include user name and password. See the Microsoft Azure Portal . You need Azure service principals, which include clientId , clientSecret , and tenantId . See azure.microsoft.com . You need an OpenShift Container Platform image pull secret. See Using image pull secrets . 
Note: If you change your cloud provider access key on the cloud provider, you also need to manually update the corresponding credential for the cloud provider on the console of multicluster engine operator. This is required when your credentials expire on the cloud provider where the managed cluster is hosted and you try to delete the managed cluster. 1.7.3.5.2. Creating your cluster with the console To create a cluster from the multicluster engine operator console, navigate to Infrastructure > Clusters . On the Clusters page, click Create cluster and complete the steps in the console. Note: This procedure is for creating a cluster. If you have an existing cluster that you want to import, see Cluster import for those steps. If you need to create a credential, see Creating a credential for Microsoft Azure for more information. The name of the cluster is used in the hostname of the cluster. Important: When you create a cluster, the controller creates a namespace for the cluster and its resources. Ensure that you include only resources for that cluster instance in that namespace. Destroying the cluster deletes the namespace and all of the resources in it. Tip: Select YAML: On to view content updates as you enter the information in the console. If you want to add your cluster to an existing cluster set, you must have the correct permissions on the cluster set to add it. If you do not have cluster-admin privileges when you are creating the cluster, you must select a cluster set on which you have clusterset-admin permissions. If you do not have the correct permissions on the specified cluster set, the cluster creation fails. Contact your cluster administrator to provide you with clusterset-admin permissions to a cluster set if you do not have any cluster set options to select. Every managed cluster must be associated with a managed cluster set. If you do not assign the managed cluster to a ManagedClusterSet , it is automatically added to the default managed cluster set. If there is already a base DNS domain that is associated with the selected credential that you configured for your Azure account, that value is populated in that field. You can change the value by overwriting it. See Configuring a custom domain name for an Azure cloud service for more information. This name is used in the hostname of the cluster. The release image identifies the version of the OpenShift Container Platform image that is used to create the cluster. If the version that you want to use is available, you can select the image from the list of images. If the image that you want to use is not a standard image, you can enter the URL to the image that you want to use. See Release images for more information about release images. The Node pools include the control plane pool and the worker pools. The control plane nodes share the management of the cluster activity. The information includes the following optional fields: Region: Specify a region where you want to run your node pools. You can select multiple zones within the region for a more distributed group of control plane nodes. A closer zone might provide faster performance, but a more distant zone might be more distributed. CPU architecture: If the architecture type of the managed cluster is not the same as the architecture of your hub cluster, enter a value for the instruction set architecture of the machines in the pool. Valid values are amd64 , ppc64le , s390x , and arm64 . 
You can change the type and size of the Instance type and Root storage allocation (required) of your control plane pool after your cluster is created. You can create one or more worker nodes in a worker pool to run the container workloads for the cluster. They can be in a single worker pool, or distributed across multiple worker pools. If zero worker nodes are specified, the control plane nodes also function as worker nodes. The information includes the following fields: Zones: Specify where you want to run your worker pools. You can select multiple zones within the region for a more distributed group of nodes. A closer zone might provide faster performance, but a more distant zone might be more distributed. Instance type: You can change the type and size of your instance after it is created. You can add an additional network by clicking Add network . You must have more than one network if you are using IPv6 addresses. Proxy information that is provided in the credential is automatically added to the proxy fields. You can use the information as it is, overwrite it, or add the information if you want to enable a proxy. The following list contains the required information for creating a proxy: HTTP proxy: The URL that should be used as a proxy for HTTP traffic. HTTPS proxy: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS . No proxy: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations. Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections. When you review your information and optionally customize it before creating the cluster, you can click the YAML switch On to view the install-config.yaml file content in the panel. You can edit the YAML file with your custom settings, if you have any updates. If you are using Red Hat Advanced Cluster Management for Kubernetes and want to configure your managed cluster klusterlet to run on specific nodes, see Optional: Configuring the klusterlet to run on specific nodes for the required steps. Note: You do not have to run the oc command that is provided with the cluster details to import the cluster. When you create the cluster, it is automatically configured under the management of multicluster engine operator. Continue with Accessing your cluster for instructions for accessing your cluster. 1.7.3.6. Creating a cluster on Google Cloud Platform Follow the procedure to create a Red Hat OpenShift Container Platform cluster on Google Cloud Platform (GCP). For more information about GCP, see Google Cloud Platform . When you create a cluster, the creation process uses the OpenShift Container Platform installer with the Hive resource. If you have questions about cluster creation after completing this procedure, see Installing on GCP in the OpenShift Container Platform documentation for more information about the process. Prerequisites Creating your cluster with the console 1.7.3.6.1. Prerequisites See the following prerequisites before creating a cluster on GCP: You must have a deployed hub cluster. You must have a GCP credential. See Creating a credential for Google Cloud Platform for more information. You must have a configured domain in GCP. See Setting up a custom domain for instructions on how to configure a domain.
You need your GCP login credentials, which include user name and password. You must have an OpenShift Container Platform image pull secret. See Using image pull secrets . Note: If you change your cloud provider access key on the cloud provider, you also need to manually update the corresponding credential for the cloud provider on the console of multicluster engine operator. This is required when your credentials expire on the cloud provider where the managed cluster is hosted and you try to delete the managed cluster. 1.7.3.6.2. Creating your cluster with the console To create clusters from the multicluster engine operator console, navigate to Infrastructure > Clusters . On the Clusters page, click Create cluster and complete the steps in the console. Note: This procedure is for creating a cluster. If you have an existing cluster that you want to import, see Cluster import for those steps. If you need to create a credential, see Creating a credential for Google Cloud Platform for more information. The name of your cluster is used in the hostname of the cluster. There are some restrictions that apply to naming your GCP cluster. These restrictions include not beginning the name with goog or containing a group of letters and numbers that resemble google anywhere in the name. See Bucket naming guidelines for the complete list of restrictions. Important: When you create a cluster, the controller creates a namespace for the cluster and its resources. Ensure that you include only resources for that cluster instance in that namespace. Destroying the cluster deletes the namespace and all of the resources in it. Tip: Select YAML: On to view content updates as you enter the information in the console. If you want to add your cluster to an existing cluster set, you must have the correct permissions on the cluster set to add it. If you do not have cluster-admin privileges when you are creating the cluster, you must select a cluster set on which you have clusterset-admin permissions. If you do not have the correct permissions on the specified cluster set, the cluster creation fails. Contact your cluster administrator to provide you with clusterset-admin permissions to a cluster set if you do not have any cluster set options to select. Every managed cluster must be associated with a managed cluster set. If you do not assign the managed cluster to a ManagedClusterSet , it is automatically added to the default managed cluster set. If there is already a base DNS domain that is associated with the selected credential for your GCP account, that value is populated in the field. You can change the value by overwriting it. See Setting up a custom domain for more information. This name is used in the hostname of the cluster. The release image identifies the version of the OpenShift Container Platform image that is used to create the cluster. If the version that you want to use is available, you can select the image from the list of images. If the image that you want to use is not a standard image, you can enter the URL to the image that you want to use. See Release images for more information about release images. The Node pools include the control plane pool and the worker pools. The control plane nodes share the management of the cluster activity. The information includes the following fields: Region: Specify a region where you want to run your control plane pools. A closer region might provide faster performance, but a more distant region might be more distributed. 
CPU architecture: If the architecture type of the managed cluster is not the same as the architecture of your hub cluster, enter a value for the instruction set architecture of the machines in the pool. Valid values are amd64 , ppc64le , s390x , and arm64 . You can specify the instance type of your control plane pool. You can change the type and size of your instance after it is created. You can create one or more worker nodes in a worker pool to run the container workloads for the cluster. They can be in a single worker pool, or distributed across multiple worker pools. If zero worker nodes are specified, the control plane nodes also function as worker nodes. The information includes the following fields: Instance type: You can change the type and size of your instance after it is created. Node count: This setting is required when you define a worker pool. The networking details are required, and multiple networks are required for using IPv6 addresses. You can add an additional network by clicking Add network . Proxy information that is provided in the credential is automatically added to the proxy fields. You can use the information as it is, overwrite it, or add the information if you want to enable a proxy. The following list contains the required information for creating a proxy: HTTP proxy: The URL that should be used as a proxy for HTTP traffic. HTTPS proxy: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS . No proxy sites: A comma-separated list of sites that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations. Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections. When you review your information and optionally customize it before creating the cluster, you can select YAML: On to view the install-config.yaml file content in the panel. You can edit the YAML file with your custom settings, if you have any updates. If you are using Red Hat Advanced Cluster Management for Kubernetes and want to configure your managed cluster klusterlet to run on specific nodes, see Optional: Configuring the klusterlet to run on specific nodes for the required steps. Note: You do not have to run the oc command that is provided with the cluster details to import the cluster. When you create the cluster, it is automatically configured under the management of multicluster engine operator. Continue with Accessing your cluster for instructions for accessing your cluster. 1.7.3.7. Creating a cluster on VMware vSphere You can use the multicluster engine operator console to deploy a Red Hat OpenShift Container Platform cluster on VMware vSphere. When you create a cluster, the creation process uses the OpenShift Container Platform installer with the Hive resource. If you have questions about cluster creation after completing this procedure, see Installing on vSphere in the OpenShift Container Platform documentation for more information about the process. Prerequisites Creating your cluster with the console 1.7.3.7.1. Prerequisites See the following prerequisites before creating a cluster on vSphere: You must have a hub cluster that is deployed on a supported OpenShift Container Platform version. You need a vSphere credential. See Creating a credential for VMware vSphere for more information. 
You need an OpenShift Container Platform image pull secret. See Using image pull secrets . You must have the following information for the VMware instance where you are deploying: Required static IP addresses for API and Ingress instances DNS records for: The following API base domain must point to the static API VIP: api.<cluster_name>.<base_domain> The following application base domain must point to the static IP address for Ingress VIP: *.apps.<cluster_name>.<base_domain> 1.7.3.7.2. Creating your cluster with the console To create a cluster from the multicluster engine operator console, navigate to Infrastructure > Clusters . On the Clusters page, click Create cluster and complete the steps in the console. Note: This procedure is for creating a cluster. If you have an existing cluster that you want to import, see Cluster import for those steps. If you need to create a credential, see Creating a credential for VMware vSphere for more information about creating a credential. The name of your cluster is used in the hostname of the cluster. Important: When you create a cluster, the controller creates a namespace for the cluster and its resources. Ensure that you include only resources for that cluster instance in that namespace. Destroying the cluster deletes the namespace and all of the resources in it. Tip: Select YAML: On to view content updates as you enter the information in the console. If you want to add your cluster to an existing cluster set, you must have the correct permissions on the cluster set to add it. If you do not have cluster-admin privileges when you are creating the cluster, you must select a cluster set on which you have clusterset-admin permissions. If you do not have the correct permissions on the specified cluster set, the cluster creation fails. Contact your cluster administrator to provide you with clusterset-admin permissions to a cluster set if you do not have any cluster set options to select. Every managed cluster must be associated with a managed cluster set. If you do not assign the managed cluster to a ManagedClusterSet , it is automatically added to the default managed cluster set. If there is already a base domain associated with the selected credential that you configured for your vSphere account, that value is populated in the field. You can change the value by overwriting it. See Installing a cluster on vSphere with customizations for more information. This value must match the name that you used to create the DNS records listed in the prerequisites section. This name is used in the hostname of the cluster. The release image identifies the version of the OpenShift Container Platform image that is used to create the cluster. If the version that you want to use is available, you can select the image from the list of images. If the image that you want to use is not a standard image, you can enter the URL to the image that you want to use. See Release images for more information about release images. Note: Release images for OpenShift Container Platform versions 4.15 and later are supported. The node pools include the control plane pool and the worker pools. The control plane nodes share the management of the cluster activity. The information includes the CPU architecture field. View the following field description: CPU architecture: If the architecture type of the managed cluster is not the same as the architecture of your hub cluster, enter a value for the instruction set architecture of the machines in the pool. 
Valid values are amd64 , ppc64le , s390x , and arm64 . You can create one or more worker nodes in a worker pool to run the container workloads for the cluster. They can be in a single worker pool, or distributed across multiple worker pools. If zero worker nodes are specified, the control plane nodes also function as worker nodes. The information includes Cores per socket , CPUs , Memory min MiB, Disk size in GiB, and Node count . Networking information is required. Multiple networks are required for using IPv6. Some of the required networking information is included in the following fields: vSphere network name: Specify the VMware vSphere network name. API VIP: Specify the IP address to use for internal API communication. Note: This value must match the name that you used to create the DNS records listed in the prerequisites section. If not provided, the DNS must be pre-configured so that api. resolves correctly. Ingress VIP: Specify the IP address to use for ingress traffic. Note: This value must match the name that you used to create the DNS records listed in the prerequisites section. If not provided, the DNS must be pre-configured so that test.apps. resolves correctly. You can add an additional network by clicking Add network . You must have more than one network if you are using IPv6 addresses. Proxy information that is provided in the credential is automatically added to the proxy fields. You can use the information as it is, overwrite it, or add the information if you want to enable a proxy. The following list contains the required information for creating a proxy: HTTP proxy: Specify the URL that should be used as a proxy for HTTP traffic. HTTPS proxy: Specify the secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS . No proxy sites: Provide a comma-separated list of sites that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations. Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections. You can define the disconnected installation image by clicking Disconnected installation . When creating a cluster by using Red Hat OpenStack Platform provider and disconnected installation, if a certificate is required to access the mirror registry, you must enter it in the Additional trust bundle field in the Configuration for disconnected installation section when configuring your credential or the Disconnected installation section when creating a cluster. You can click Add automation template to create a template. When you review your information and optionally customize it before creating the cluster, you can click the YAML switch On to view the install-config.yaml file content in the panel. You can edit the YAML file with your custom settings, if you have any updates. If you are using Red Hat Advanced Cluster Management for Kubernetes and want to configure your managed cluster klusterlet to run on specific nodes, see Optional: Configuring the klusterlet to run on specific nodes for the required steps. Note: You do not have to run the oc command that is provided with the cluster details to import the cluster. When you create the cluster, it is automatically configured under the management of multicluster engine operator. Continue with Accessing your cluster for instructions for accessing your cluster. 1.7.3.8.
Creating a cluster on Red Hat OpenStack Platform You can use the multicluster engine operator console to deploy a Red Hat OpenShift Container Platform cluster on Red Hat OpenStack Platform. When you create a cluster, the creation process uses the OpenShift Container Platform installer with the Hive resource. If you have questions about cluster creation after completing this procedure, see Installing on OpenStack in the OpenShift Container Platform documentation for more information about the process. Prerequisites Creating your cluster with the console 1.7.3.8.1. Prerequisites See the following prerequisites before creating a cluster on Red Hat OpenStack Platform: You must have a hub cluster that is deployed on OpenShift Container Platform version 4.6 or later. You must have a Red Hat OpenStack Platform credential. See Creating a credential for Red Hat OpenStack Platform for more information. You need an OpenShift Container Platform image pull secret. See Using image pull secrets . You need the following information for the Red Hat OpenStack Platform instance where you are deploying: Flavor name for the control plane and worker instances; for example, m1.xlarge Network name for the external network to provide the floating IP addresses Required floating IP addresses for API and ingress instances DNS records for: The following API base domain must point to the floating IP address for the API: api.<cluster_name>.<base_domain> The following application base domain must point to the floating IP address for ingress:app-name: *.apps.<cluster_name>.<base_domain> 1.7.3.8.2. Creating your cluster with the console To create a cluster from the multicluster engine operator console, navigate to Infrastructure > Clusters . On the Clusters page, click Create cluster and complete the steps in the console. Note: This procedure is for creating a cluster. If you have an existing cluster that you want to import, see Cluster import for those steps. If you need to create a credential, see Creating a credential for Red Hat OpenStack Platform for more information. The name of the cluster is used in the hostname of the cluster. The name must contain fewer than 15 characters. This value must match the name that you used to create the DNS records listed in the credential prerequisites section. Important: When you create a cluster, the controller creates a namespace for the cluster and its resources. Ensure that you include only resources for that cluster instance in that namespace. Destroying the cluster deletes the namespace and all of the resources in it. Tip: Select YAML: On to view content updates as you enter the information in the console. If you want to add your cluster to an existing cluster set, you must have the correct permissions on the cluster set to add it. If you do not have cluster-admin privileges when you are creating the cluster, you must select a cluster set on which you have clusterset-admin permissions. If you do not have the correct permissions on the specified cluster set, the cluster creation fails. Contact your cluster administrator to provide you with clusterset-admin permissions to a cluster set if you do not have any cluster set options to select. Every managed cluster must be associated with a managed cluster set. If you do not assign the managed cluster to a ManagedClusterSet , it is automatically added to the default managed cluster set. 
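The flavor, external network, and floating IP values from the Prerequisites section above eventually appear in the install-config.yaml file that the YAML: On view displays. The following snippet is only an illustrative sketch with placeholder values; the exact field names can differ between OpenShift Container Platform versions, so verify them against the Installing on OpenStack documentation before relying on them:

    platform:
      openstack:
        cloud: openstack                 # cloud entry name from clouds.yaml (assumed)
        externalNetwork: external        # network that provides the floating IP addresses
        apiFloatingIP: 203.0.113.10      # must resolve from api.<cluster_name>.<base_domain>
        ingressFloatingIP: 203.0.113.11  # must resolve from *.apps.<cluster_name>.<base_domain>
    controlPlane:
      name: master
      replicas: 3
      platform:
        openstack:
          type: m1.xlarge                # control plane flavor
    compute:
    - name: worker
      replicas: 3
      platform:
        openstack:
          type: m1.xlarge                # worker flavor

The DNS records from the prerequisites must resolve to the two floating IP addresses before the installation starts.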
If there is already a base DNS domain that is associated with the selected credential that you configured for your Red Hat OpenStack Platform account, that value is populated in the field. You can change the value by overwriting it. See Managing domains in the Red Hat OpenStack Platform documentation for more information. This name is used in the hostname of the cluster. The release image identifies the version of the OpenShift Container Platform image that is used to create the cluster. If the version that you want to use is available, you can select the image from the list of images. If the image that you want to use is not a standard image, you can enter the URL to the image that you want to use. See Release images for more information about release images. Only release images for OpenShift Container Platform versions 4.6.x and higher are supported. The node pools include the control plane pool and the worker pools. The control plane nodes share the management of the cluster activity. If the architecture type of the managed cluster is not the same as the architecture of your hub cluster, enter a value for the instruction set architecture of the machines in the pool. Valid values are amd64 , ppc64le , s390x , and arm64 . You must add an instance type for your control plane pool, but you can change the type and size of your instance after it is created. You can create one or more worker nodes in a worker pool to run the container workloads for the cluster. They can be in a single worker pool, or distributed across multiple worker pools. If zero worker nodes are specified, the control plane nodes also function as worker nodes. The information includes the following fields: Instance type: You can change the type and size of your instance after it is created. Node count: Specify the node count for your worker pool. This setting is required when you define a worker pool. Networking details are required for your cluster. You must provide the values for one or more networks for an IPv4 network. For an IPv6 network, you must define more than one network. You can add an additional network by clicking Add network . You must have more than one network if you are using IPv6 addresses. Proxy information that is provided in the credential is automatically added to the proxy fields. You can use the information as it is, overwrite it, or add the information if you want to enable a proxy. The following list contains the required information for creating a proxy: HTTP proxy: Specify the URL that should be used as a proxy for HTTP traffic. HTTPS proxy: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy is used for both HTTP and HTTPS . No proxy: Define a comma-separated list of sites that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations. Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections. You can define the disconnected installation image by clicking Disconnected installation . When creating a cluster by using Red Hat OpenStack Platform provider and disconnected installation, if a certificate is required to access the mirror registry, you must enter it in the Additional trust bundle field in the Configuration for disconnected installation section when configuring your credential or the Disconnected installation section when creating a cluster. 
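As a reference for the proxy and Additional trust bundle fields described above, the corresponding stanzas in install-config.yaml typically resemble the following sketch. The values are placeholders, not defaults:

    proxy:
      httpProxy: http://<username>:<password>@<proxy.example.com>:<port>
      httpsProxy: https://<username>:<password>@<proxy.example.com>:<port>
      noProxy: .cluster.local,.example.com,10.0.0.0/16
    additionalTrustBundle: |
      -----BEGIN CERTIFICATE-----
      <CA certificate for the proxy or mirror registry>
      -----END CERTIFICATE-----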
When you review your information and optionally customize it before creating the cluster, you can click the YAML switch On to view the install-config.yaml file content in the panel. You can edit the YAML file with your custom settings, if you have any updates. When creating a cluster that uses an internal certificate authority (CA), you need to customize the YAML file for your cluster by completing the following steps: With the YAML switch on at the review step, insert a Secret object at the top of the list with the CA certificate bundle. Note: If the Red Hat OpenStack Platform environment provides services using certificates signed by multiple authorities, the bundle must include the certificates to validate all of the required endpoints. The addition for a cluster named ocp3 resembles the following example:

    apiVersion: v1
    kind: Secret
    type: Opaque
    metadata:
      name: ocp3-openstack-trust
      namespace: ocp3
    stringData:
      ca.crt: |
        -----BEGIN CERTIFICATE-----
        <Base64 certificate contents here>
        -----END CERTIFICATE-----
        -----BEGIN CERTIFICATE-----
        <Base64 certificate contents here>
        -----END CERTIFICATE-----

Modify the Hive ClusterDeployment object to specify the value of certificatesSecretRef in spec.platform.openstack , similar to the following example:

    platform:
      openstack:
        certificatesSecretRef:
          name: ocp3-openstack-trust
        credentialsSecretRef:
          name: ocp3-openstack-creds
        cloud: openstack

The example assumes that the cloud name in the clouds.yaml file is openstack . If you are using Red Hat Advanced Cluster Management for Kubernetes and want to configure your managed cluster klusterlet to run on specific nodes, see Optional: Configuring the klusterlet to run on specific nodes for the required steps. Note: You do not have to run the oc command that is provided with the cluster details to import the cluster. When you create the cluster, it is automatically configured under the management of multicluster engine operator. Continue with Accessing your cluster for instructions for accessing your cluster.
1.7.3.9. Creating a cluster in an on-premises environment
You can use the console to create on-premises Red Hat OpenShift Container Platform clusters. The clusters can be single-node OpenShift clusters, multi-node clusters, and compact three-node clusters on VMware vSphere, Red Hat OpenStack, Nutanix, or in a bare metal environment. There is no platform integration with the platform where you install the cluster, as the platform value is set to platform=none . A single-node OpenShift cluster contains only a single node, which hosts the control plane services and the user workloads. This configuration can be helpful when you want to minimize the resource footprint of the cluster. You can also provision multiple single-node OpenShift clusters on edge resources by using the zero touch provisioning feature, a feature that is available with Red Hat OpenShift Container Platform. For more information about zero touch provisioning, see Clusters at the network far edge in the OpenShift Container Platform documentation.
Prerequisites
Creating your cluster with the console
Creating your cluster with the command line
1.7.3.9.1. Prerequisites
See the following prerequisites before creating a cluster in an on-premises environment:
You must have a deployed hub cluster on a supported OpenShift Container Platform version.
You need a configured infrastructure environment with a host inventory of configured hosts.
You must have internet access for your hub cluster (connected), or a connection to an internal or mirror registry that has a connection to the internet (disconnected) to retrieve the required images for creating the cluster. You need a configured on-premises credential. You need an OpenShift Container Platform image pull secret. See Using image pull secrets . You need the following DNS records: The following API base domain must point to the static API VIP: The following application base domain must point to the static IP address for Ingress VIP: Review the hub cluster KubeAPIServer certificate verification strategy to make sure that the default UseAutoDetectedCABundle strategy works. If you need to manually change the strategy, see Configuring the hub cluster KubeAPIServer verification strategy . 1.7.3.9.2. Creating your cluster with the console To create a cluster from the console, complete the following steps: Navigate to Infrastructure > Clusters . On the Clusters page, click Create cluster and complete the steps in the console. Select Host inventory as the type of cluster. The following options are available for your assisted installation: Use existing discovered hosts : Select your hosts from a list of hosts that are in an existing host inventory. Discover new hosts : Discover hosts that are not already in an existing infrastructure environment. Discover your own hosts, rather than using one that is already in an infrastructure environment. If you need to create a credential, see Creating a credential for an on-premises environment for more information. The name for your cluster is used in the hostname of the cluster. Important: When you create a cluster, the controller creates a namespace for the cluster and its resources. Ensure that you include only resources for that cluster instance in that namespace. Destroying the cluster deletes the namespace and all of the resources in it. Note: Select YAML: On to view content updates as you enter the information in the console. If you want to add your cluster to an existing cluster set, you must have the correct permissions on the cluster set to add it. If you do not have cluster-admin privileges when you are creating the cluster, you must select a cluster set on which you have clusterset-admin permissions. If you do not have the correct permissions on the specified cluster set, the cluster creation fails. Contact your cluster administrator to provide you with clusterset-admin permissions to a cluster set if you do not have any cluster set options to select. Every managed cluster must be associated with a managed cluster set. If you do not assign the managed cluster to a ManagedClusterSet , it is automatically added to the default managed cluster set. If there is already a base DNS domain that is associated with the selected credential that you configured for your provider account, that value is populated in that field. You can change the value by overwriting it, but this setting cannot be changed after the cluster is created. The base domain of your provider is used to create routes to your Red Hat OpenShift Container Platform cluster components. It is configured in the DNS of your cluster provider as a Start of Authority (SOA) record. The OpenShift version identifies the version of the OpenShift Container Platform image that is used to create the cluster. If the version that you want to use is available, you can select the image from the list of images. 
If the image that you want to use is not a standard image, you can enter the URL to the image that you want to use. See Release images to learn more. When you select a supported OpenShift Container Platform version, an option to select Install single-node OpenShift is displayed. A single-node OpenShift cluster contains a single node which hosts the control plane services and the user workloads. See Scaling hosts to an infrastructure environment to learn more about adding nodes to a single-node OpenShift cluster after it is created. If you want your cluster to be a single-node OpenShift cluster, select the single-node OpenShift option. You can add additional workers to single-node OpenShift clusters by completing the following steps: From the console, navigate to Infrastructure > Clusters and select the name of the cluster that you created or want to access. Select Actions > Add hosts to add additional workers. Note: The single-node OpenShift control plane requires 8 CPU cores, while a control plane node for a multinode control plane cluster only requires 4 CPU cores. After you review and save the cluster, your cluster is saved as a draft cluster. You can close the creation process and finish the process later by selecting the cluster name on the Clusters page. If you are using existing hosts, select whether you want to select the hosts yourself, or if you want them to be selected automatically. The number of hosts is based on the number of nodes that you selected. For example, a single-node OpenShift cluster only requires one host, while a standard three-node cluster requires three hosts. The locations of the available hosts that meet the requirements for this cluster are displayed in the list of Host locations . For distribution of the hosts and a more high-availability configuration, select multiple locations. If you are discovering new hosts with no existing infrastructure environment, complete the steps in Adding hosts to the host inventory by using the Discovery Image . After the hosts are bound, and the validations pass, complete the networking information for your cluster by adding the following IP addresses: API VIP: Specifies the IP address to use for internal API communication. Note: This value must match the name that you used to create the DNS records listed in the prerequisites section. If not provided, the DNS must be pre-configured so that api. resolves correctly. Ingress VIP: Specifies the IP address to use for ingress traffic. Note: This value must match the name that you used to create the DNS records listed in the prerequisites section. If not provided, the DNS must be pre-configured so that test.apps. resolves correctly. If you are using Red Hat Advanced Cluster Management for Kubernetes and want to configure your managed cluster klusterlet to run on specific nodes, see Optional: Configuring the klusterlet to run on specific nodes for the required steps. You can view the status of the installation on the Clusters navigation page. Continue with Accessing your cluster for instructions for accessing your cluster. 1.7.3.9.3. Creating your cluster with the command line You can also create a cluster without the console by using the Assisted Installer feature within the central infrastructure management component. After you complete this procedure, you can boot the host from the discovery image that is generated. The order of the procedures is generally not important, but is noted when there is a required order. 1.7.3.9.3.1. 
Create the namespace
You need a namespace for your resources. It is more convenient to keep all of the resources in a shared namespace. This example uses sample-namespace for the name of the namespace, but you can use any name except assisted-installer . Create a namespace by creating and applying the following file:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: sample-namespace

1.7.3.9.3.2. Add the pull secret to the namespace
Add your pull secret to your namespace by creating and applying the following custom resource:

    apiVersion: v1
    kind: Secret
    type: kubernetes.io/dockerconfigjson
    metadata:
      name: <pull-secret>
      namespace: sample-namespace
    stringData:
      .dockerconfigjson: 'your-pull-secret-json' 1

1 Add the content of the pull secret. For example, this can include a cloud.openshift.com , quay.io , or registry.redhat.io authentication.
1.7.3.9.3.3. Generate a ClusterImageSet
Generate a ClusterImageSet to specify the version of OpenShift Container Platform for your cluster by creating and applying the following custom resource:

    apiVersion: hive.openshift.io/v1
    kind: ClusterImageSet
    metadata:
      name: openshift-v4.15.0
    spec:
      releaseImage: quay.io/openshift-release-dev/ocp-release:4.15.0-rc.0-x86_64

Note: You need to create a multi-architecture ClusterImageSet if you install a managed cluster that has a different architecture than the hub cluster. To learn more, see Creating a release image to deploy a cluster on a different architecture .
1.7.3.9.3.4. Create the ClusterDeployment custom resource
The ClusterDeployment custom resource definition is an API that controls the lifecycle of the cluster. It references the AgentClusterInstall custom resource in the spec.ClusterInstallRef setting, which defines the cluster parameters. Create and apply a ClusterDeployment custom resource based on the following example:

    apiVersion: hive.openshift.io/v1
    kind: ClusterDeployment
    metadata:
      name: single-node
      namespace: demo-worker4
    spec:
      baseDomain: hive.example.com
      clusterInstallRef:
        group: extensions.hive.openshift.io
        kind: AgentClusterInstall
        name: test-agent-cluster-install 1
        version: v1beta1
      clusterName: test-cluster
      controlPlaneConfig:
        servingCertificates: {}
      platform:
        agentBareMetal:
          agentSelector:
            matchLabels:
              location: internal
      pullSecretRef:
        name: <pull-secret> 2

1 Use the name of your AgentClusterInstall resource.
2 Use the pull secret that you downloaded in Add the pull secret to the namespace .
1.7.3.9.3.5. Create the AgentClusterInstall custom resource
In the AgentClusterInstall custom resource, you can specify many of the requirements for the clusters. For example, you can specify the cluster network settings, platform, number of control planes, and worker nodes. Create and apply a custom resource that resembles the following example:

    apiVersion: extensions.hive.openshift.io/v1beta1
    kind: AgentClusterInstall
    metadata:
      name: test-agent-cluster-install
      namespace: demo-worker4
    spec:
      platformType: BareMetal 1
      clusterDeploymentRef:
        name: single-node 2
      imageSetRef:
        name: openshift-v4.15.0 3
      networking:
        clusterNetwork:
          - cidr: 10.128.0.0/14
            hostPrefix: 23
        machineNetwork:
          - cidr: 192.168.111.0/24
        serviceNetwork:
          - 172.30.0.0/16
      provisionRequirements:
        controlPlaneAgents: 1
      sshPublicKey: ssh-rsa <your-public-key-here> 4

1 Specify the platform type of the environment where the cluster is created. Valid values are: BareMetal , None , VSphere , Nutanix , or External .
2 Use the same name that you used for your ClusterDeployment resource.
3 Use the ClusterImageSet that you generated in Generate a ClusterImageSet .
4 You can specify your SSH public key, which enables you to access the host after it is installed.
1.7.3.9.3.6. Optional: Create the NMStateConfig custom resource
The NMStateConfig custom resource is only required if you have a host-level network configuration, such as static IP addresses. If you include this custom resource, you must complete this step before creating an InfraEnv custom resource. The NMStateConfig is referred to by the values for spec.nmStateConfigLabelSelector in the InfraEnv custom resource. Create and apply your NMStateConfig custom resource, which resembles the following example. Replace values where needed:

    apiVersion: agent-install.openshift.io/v1beta1
    kind: NMStateConfig
    metadata:
      name: <mynmstateconfig>
      namespace: <demo-worker4>
      labels:
        demo-nmstate-label: <value>
    spec:
      config:
        interfaces:
          - name: eth0
            type: ethernet
            state: up
            mac-address: 02:00:00:80:12:14
            ipv4:
              enabled: true
              address:
                - ip: 192.168.111.30
                  prefix-length: 24
              dhcp: false
          - name: eth1
            type: ethernet
            state: up
            mac-address: 02:00:00:80:12:15
            ipv4:
              enabled: true
              address:
                - ip: 192.168.140.30
                  prefix-length: 24
              dhcp: false
        dns-resolver:
          config:
            server:
              - 192.168.126.1
        routes:
          config:
            - destination: 0.0.0.0/0
              next-hop-address: 192.168.111.1
              next-hop-interface: eth1
              table-id: 254
            - destination: 0.0.0.0/0
              next-hop-address: 192.168.140.1
              next-hop-interface: eth1
              table-id: 254
      interfaces:
        - name: "eth0"
          macAddress: "02:00:00:80:12:14"
        - name: "eth1"
          macAddress: "02:00:00:80:12:15"

Note: You must include the demo-nmstate-label label name and value in the InfraEnv resource spec.nmStateConfigLabelSelector.matchLabels field.
1.7.3.9.3.7. Create the InfraEnv custom resource
The InfraEnv custom resource provides the configuration to create the discovery ISO. Within this custom resource, you identify values for proxy settings, ignition overrides, and specify NMState labels. The value of spec.nmStateConfigLabelSelector in this custom resource references the NMStateConfig custom resource. Note: If you plan to include the optional NMStateConfig custom resource, you must reference it in the InfraEnv custom resource. If you create the InfraEnv custom resource before you create the NMStateConfig custom resource, edit the InfraEnv custom resource to reference the NMStateConfig custom resource, and then download the ISO after the reference is added. Create and apply the following custom resource:

    apiVersion: agent-install.openshift.io/v1beta1
    kind: InfraEnv
    metadata:
      name: myinfraenv
      namespace: demo-worker4
    spec:
      clusterRef:
        name: single-node 1
        namespace: demo-worker4 2
      pullSecretRef:
        name: pull-secret
      sshAuthorizedKey: <your_public_key_here>
      nmStateConfigLabelSelector:
        matchLabels:
          demo-nmstate-label: value
      proxy:
        httpProxy: http://USERNAME:PASSWORD@proxy.example.com:PORT
        httpsProxy: https://USERNAME:PASSWORD@proxy.example.com:PORT
        noProxy: .example.com,172.22.0.0/24,10.10.0.0/24

1 Use the name of the ClusterDeployment resource from Create the ClusterDeployment custom resource .
2 Use the namespace of the ClusterDeployment resource from Create the ClusterDeployment custom resource .
1.7.3.9.3.7.1. InfraEnv field table Field Optional or required Description sshAuthorizedKey Optional You can specify your SSH public key, which enables you to access the host when it is booted from the discovery ISO image. nmStateConfigLabelSelector Optional Consolidates advanced network configuration such as static IPs, bridges, and bonds for the hosts.
The host network configuration is specified in one or more NMStateConfig resources with labels you choose. The nmStateConfigLabelSelector property is a Kubernetes label selector that matches your chosen labels. The network configuration for all NMStateConfig labels that match this label selector is included in the Discovery Image. When you boot, each host compares each configuration to its network interfaces and applies the appropriate configuration. proxy Optional You can specify proxy settings required by the host during discovery in the proxy section. Note: When provisioning with IPv6, you cannot define a CIDR address block in the noProxy settings. You must define each address separately. 1.7.3.9.3.8. Boot the host from the discovery image The remaining steps explain how to boot the host from the discovery ISO image that results from the procedures. Download the discovery image from the namespace by running the following command: Move the discovery image to virtual media, a USB drive, or another storage location and boot the host from the discovery image that you downloaded. The Agent resource is created automatically. It is registered to the cluster and represents a host that booted from a discovery image. Approve the Agent custom resource and start the installation by running the following command: Replace the agent name and UUID with your values. You can confirm that it was approved when the output of the command includes an entry for the target cluster that includes a value of true for the APPROVED parameter. 1.7.3.9.4. Additional resources For additional steps that are required when creating a cluster on the Nutanix platform with the CLI, see Adding hosts on Nutanix with the API and Nutanix post-installation configuration in the Red Hat OpenShift Container Platform documentation. For additional information about zero touch provisioning, see Clusters at the network far edge in the OpenShift Container Platform documentation. See Using image pull secrets See Creating a credential for an on-premises environment See Release images See Adding hosts to the host inventory by using the Discovery Image 1.7.3.10. Creating a cluster in a proxy environment You can create a Red Hat OpenShift Container Platform cluster when your hub cluster is connected through a proxy server. One of the following situations must be true for the cluster creation to succeed: multicluster engine operator has a private network connection with the managed cluster that you are creating, with managed cluster access to the Internet by using a proxy. The managed cluster is on an infrastructure provider, but the firewall ports enable communication from the managed cluster to the hub cluster. To create a cluster that is configured with a proxy, complete the following steps: Configure the cluster-wide-proxy setting on the hub cluster by adding the following information to your install-config YAML that is stored in your Secret: apiVersion: v1 kind: Proxy baseDomain: <domain> proxy: httpProxy: http://<username>:<password>@<proxy.example.com>:<port> httpsProxy: https://<username>:<password>@<proxy.example.com>:<port> noProxy: <wildcard-of-domain>,<provisioning-network/CIDR>,<BMC-address-range/CIDR> Replace username with the username for your proxy server. Replace password with the password to access your proxy server. Replace proxy.example.com with the path of your proxy server. Replace port with the communication port with the proxy server. Replace wildcard-of-domain with an entry for domains that should bypass the proxy. 
Replace provisioning-network/CIDR with the IP address of the provisioning network and the number of assigned IP addresses, in CIDR notation. Replace BMC-address-range/CIDR with the BMC address and the number of addresses, in CIDR notation. Provision the cluster by completing the procedure for creating a cluster. See Creating a cluster to select your provider. Note: You can only use install-config YAML when deploying your cluster. After deploying your cluster, any new changes you make to install-config YAML do not apply. To update the configuration after deployment, you must use policies. See Pod policy for more information. 1.7.3.10.1. Additional resources See Creating clusters to select your provider. See Pod policy to learn how to make configuration changes after deploying your cluster. 1.7.3.11. Configuring AgentClusterInstall proxy The AgentClusterInstall proxy fields determine the proxy settings during installation, and are used to create the cluster-wide proxy resource in the created cluster. 1.7.3.11.1. Configuring AgentClusterInstall To configure the AgentClusterInstall proxy, add the proxy settings to the AgentClusterInstall resource. See the following YAML sample with httpProxy , httpsProxy , and noProxy : apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall spec: proxy: httpProxy: http://<username>:<password>@<proxy.example.com>:<port> 1 httpsProxy: https://<username>:<password>@<proxy.example.com>:<port> 2 noProxy: <wildcard-of-domain>,<provisioning-network/CIDR>,<BMC-address-range/CIDR> 3 1 httpProxy is the URL of the proxy for HTTP requests. Replace the username and password values with your credentials for your proxy server. Replace proxy.example.com with the path of your proxy server. 2 httpsProxy is the URL of the proxy for HTTPS requests. Replace the values with your credentials. Replace port with the communication port with the proxy server. 3 noProxy is a comma-separated list of domains and CIDRs for which the proxy should not be used. Replace wildcard-of-domain with an entry for domains that should bypass the proxy. Replace provisioning-network/CIDR with the IP address of the provisioning network and the number of assigned IP addresses, in CIDR notation. Replace BMC-address-range/CIDR with the BMC address and the number of addresses, in CIDR notation. 1.7.3.11.2. Additional resources Enabling the central infrastructure management service 1.7.4. Cluster import You can import clusters from different Kubernetes cloud providers. After you import, the target cluster becomes a managed cluster for the multicluster engine operator hub cluster. You can generally complete the import tasks anywhere that you can access the hub cluster and the target managed cluster, unless otherwise specified. A hub cluster cannot manage any other hub cluster, but can manage itself. The hub cluster is configured to automatically be imported and self-managed. You do not need to manually import the hub cluster. If you remove a hub cluster and try to import it again, you must add the local-cluster:true label to the ManagedCluster resource. Important: Cluster lifecycle now supports all providers that are certified through the Cloud Native Computing Foundation (CNCF) Kubernetes Conformance Program. Choose a vendor that is recognized by CNFC for your hybrid cloud multicluster management. See the following information about using CNFC providers: Learn how CNFC providers are certified at Certified Kubernetes Conformance . 
For Red Hat support information about CNFC third-party providers, see Red Hat support with third party components , or Contact Red Hat support . If you bring your own CNFC conformance certified cluster, you need to change the OpenShift Container Platform CLI oc command to the Kubernetes CLI command, kubectl . Read the following topics to learn more about importing a cluster so that you can manage it: Required user type or access level : Cluster administrator Importing an existing cluster by using the console Importing a managed cluster by using the CLI Importing a managed cluster by using agent registration Importing an on-premises Red Hat OpenShift Container Platform cluster 1.7.4.1. Importing a managed cluster by using the console After you install multicluster engine for Kubernetes operator, you are ready to import a cluster to manage. Continue reading the following topics learn how to import a managed cluster by using the console: Prerequisites Creating a new pull secret Importing a cluster Optional: Configuring the cluster API address Removing a cluster 1.7.4.1.1. Prerequisites A deployed hub cluster. If you are importing bare metal clusters, the hub cluster must be installed on a supported Red Hat OpenShift Container Platform version. A cluster you want to manage. The base64 command line tool. A defined multiclusterhub.spec.imagePullSecret if you are importing a cluster that was not created by OpenShift Container Platform. This secret might have been created when multicluster engine for Kubernetes operator was installed. See Custom image pull secret for more information about how to define this secret. Review the hub cluster KubeAPIServer certificate verification strategy to make sure that the default UseAutoDetectedCABundle strategy works. If you need to manually change the strategy, see Configuring the hub cluster KubeAPIServer verification strategy . Required user type or access level: Cluster administrator 1.7.4.1.2. Creating a new pull secret If you need to create a new pull secret, complete the following steps: Download your Kubernetes pull secret from cloud.redhat.com . Add the pull secret to the namespace of your hub cluster. Run the following command to create a new secret in the open-cluster-management namespace: Replace open-cluster-management with the name of the namespace of your hub cluster. The default namespace of the hub cluster is open-cluster-management . Replace path-to-pull-secret with the path to the pull secret that you downloaded. The secret is automatically copied to the managed cluster when it is imported. Ensure that a previously installed agent is deleted from the cluster that you want to import. You must remove the open-cluster-management-agent and open-cluster-management-agent-addon namespaces to avoid errors. For importing in a Red Hat OpenShift Dedicated environment, see the following notes: You must have the hub cluster deployed in a Red Hat OpenShift Dedicated environment. The default permission in Red Hat OpenShift Dedicated is dedicated-admin, but that does not contain all of the permissions to create a namespace. You must have cluster-admin permissions to import and manage a cluster with multicluster engine operator. 1.7.4.1.3. Importing a cluster You can import existing clusters from the console for each of the available cloud providers. Note: A hub cluster cannot manage a different hub cluster. A hub cluster is set up to automatically import and manage itself, so you do not have to manually import a hub cluster to manage itself. 
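For the Creating a new pull secret steps above, the secret is typically created with a command similar to the following sketch. The secret name pull-secret is an assumption for illustration; adjust the namespace and the file path for your environment:

    oc create secret generic pull-secret \
      -n open-cluster-management \
      --from-file=.dockerconfigjson=<path-to-pull-secret> \
      --type=kubernetes.io/dockerconfigjson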
By default, the namespace is used for the cluster name and namespace, but you can change it. Important: When you create a cluster, the controller creates a namespace for the cluster and its resources. Ensure that you include only resources for that cluster instance in that namespace. Destroying the cluster deletes the namespace and all of the resources in it. Every managed cluster must be associated with a managed cluster set. If you do not assign the managed cluster to a ManagedClusterSet , the cluster is automatically added to the default managed cluster set. If you want to add the cluster to a different cluster set, you must have clusterset-admin privileges to the cluster set. If you do not have cluster-admin privileges when you are importing the cluster, you must select a cluster set on which you have clusterset-admin permissions. If you do not have the correct permissions on the specified cluster set, the cluster importing fails. Contact your cluster administrator to provide you with clusterset-admin permissions to a cluster set if you do not have cluster set options to select. If you import a OpenShift Container Platform Dedicated cluster and do not specify a vendor by adding a label for vendor=OpenShiftDedicated , or if you add a label for vendor=auto-detect , a managed-by=platform label is automatically added to the cluster. You can use this added label to identify the cluster as a OpenShift Container Platform Dedicated cluster and retrieve the OpenShift Container Platform Dedicated clusters as a group. The following table provides the available options for import mode , which specifies the method for importing the cluster: Run import commands manually After completing and submitting the information in the console, including any Red Hat Ansible Automation Platform templates, run the provided command on the target cluster to import the cluster. Enter your server URL and API token for the existing cluster Provide the server URL and API token of the cluster that you are importing. You can specify a Red Hat Ansible Automation Platform template to run when the cluster is upgraded. Provide the kubeconfig file Copy and paste the contents of the kubeconfig file of the cluster that you are importing. You can specify a Red Hat Ansible Automation Platform template to run when the cluster is upgraded. Note: You must have the Red Hat Ansible Automation Platform Resource Operator installed from OperatorHub to create and run an Ansible Automation Platform job. To configure a cluster API address, see Optional: Configuring the cluster API address . To configure your managed cluster klusterlet to run on specific nodes, see Optional: Configuring the klusterlet to run on specific nodes . 1.7.4.1.3.1. Optional: Configuring the cluster API address Complete the following steps to optionally configure the Cluster API address that is on the cluster details page by configuring the URL that is displayed in the table when you run the oc get managedcluster command: Log in to your hub cluster with an ID that has cluster-admin permissions. Configure a kubeconfig file for your targeted managed cluster. 
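For the kubeconfig step above, one hypothetical approach is to save the managed cluster credentials to a separate kubeconfig file so that your hub cluster session is not disturbed. The URL, user, and file path are placeholders:

    # Save the managed cluster credentials to a dedicated kubeconfig file
    oc login https://api.<managed_cluster_name>.<base_domain>:6443 --kubeconfig=<path_to_managed_cluster_kubeconfig>
    # Confirm which cluster that kubeconfig targets
    oc --kubeconfig=<path_to_managed_cluster_kubeconfig> cluster-info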
Edit the managed cluster entry for the cluster that you are importing by running the following command, replacing cluster-name with the name of the managed cluster: Add the ManagedClusterClientConfigs section to the ManagedCluster spec in the YAML file, as shown in the following example: spec: hubAcceptsClient: true managedClusterClientConfigs: - url: <https://api.new-managed.dev.redhat.com> 1 1 Replace the value of the URL with the URL that provides external access to the managed cluster that you are importing. 1.7.4.1.3.2. Optional: Configuring the klusterlet to run on specific nodes You can specify which nodes you want the managed cluster klusterlet to run on by configuring the nodeSelector and tolerations annotation for the managed cluster. Complete the following steps to configure these settings: Select the managed cluster that you want to update from the clusters page in the console. Set the YAML switch to On to view the YAML content. Note: The YAML editor is only available when importing or creating a cluster. To edit the managed cluster YAML definition after importing or creating, you must use the OpenShift Container Platform command-line interface or the Red Hat Advanced Cluster Management search feature. Add the nodeSelector annotation to the managed cluster YAML definition. The key for this annotation is: open-cluster-management/nodeSelector . The value of this annotation is a string map with JSON formatting. Add the tolerations entry to the managed cluster YAML definition. The key of this annotation is: open-cluster-management/tolerations . The value of this annotation represents a toleration list with JSON formatting. The resulting YAML might resemble the following example: apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: annotations: open-cluster-management/nodeSelector: '{"dedicated":"acm"}' open-cluster-management/tolerations: '[{"key":"dedicated","operator":"Equal","value":"acm","effect":"NoSchedule"}]' You can also use a KlusterletConfig to configure the nodeSelector and tolerations for the managed cluster. Complete the following steps to configure these settings: Note: If you use a KlusterletConfig , the managed cluster uses the configuration in the KlusterletConfig settings instead of the settings in the managed cluster annotation. Apply the following sample YAML content. Replace value where needed: apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: <klusterletconfigName> spec: nodePlacement: nodeSelector: dedicated: acm tolerations: - key: dedicated operator: Equal value: acm effect: NoSchedule Add the agent.open-cluster-management.io/klusterlet-config: `<klusterletconfigName> annotation to the managed cluster, replacing <klusterletconfigName> with the name of your KlusterletConfig . 1.7.4.1.4. Removing an imported cluster Complete the following procedure to remove an imported cluster and the open-cluster-management-agent-addon that was created on the managed cluster. On the Clusters page, click Actions > Detach cluster to remove your cluster from management. Note: If you attempt to detach the hub cluster, which is named local-cluster , be aware that the default setting of disableHubSelfManagement is false . This setting causes the hub cluster to reimport itself and manage itself when it is detached and it reconciles the MultiClusterHub controller. It might take hours for the hub cluster to complete the detachment process and reimport. 
If you want to reimport the hub cluster without waiting for the processes to finish, you can run the following command to restart the multiclusterhub-operator pod and reimport faster: You can change the value of the hub cluster to not import automatically by changing the disableHubSelfManagement value to true . For more information, see the disableHubSelfManagement topic. 1.7.4.1.4.1. Additional resources See Custom image pull secret for more information about how to define a custom image pull secret. See the disableHubSelfManagement topic. 1.7.4.2. Importing a managed cluster by using the CLI After you install multicluster engine for Kubernetes operator, you are ready to import a cluster and manage it by using the Red Hat OpenShift Container Platform CLI. Continue reading the following topics to learn how to import a managed cluster with the CLI by using the auto import secret, or by using manual commands. Prerequisites Supported architectures Preparing cluster import Importing a cluster by using the auto import secret Importing a cluster manually Importing the klusterlet add-on Removing an imported cluster by using the CLI Important: A hub cluster cannot manage a different hub cluster. A hub cluster is set up to automatically import and manage itself as a local cluster . You do not have to manually import a hub cluster to manage itself. If you remove a hub cluster and try to import it again, you need to add the local-cluster:true label. 1.7.4.2.1. Prerequisites A deployed hub cluster. If you are importing bare metal clusters, the hub cluster must be installed on a supported OpenShift Container Platform version. A separate cluster you want to manage. The OpenShift Container Platform CLI. See Getting started with the OpenShift CLI for information about installing and configuring the OpenShift Container Platform CLI. A defined multiclusterhub.spec.imagePullSecret if you are importing a cluster that was not created by OpenShift Container Platform. This secret might have been created when multicluster engine for Kubernetes operator was installed. See Custom image pull secret for more information about how to define this secret. Review the hub cluster KubeAPIServer certificate verification strategy to make sure that the default UseAutoDetectedCABundle strategy works. If you need to manually change the strategy, see Configuring the hub cluster KubeAPIServer verification strategy . 1.7.4.2.2. Supported architectures Linux (x86_64, s390x, ppc64le) macOS 1.7.4.2.3. Preparing for cluster import Before importing a managed cluster by using the CLI, you must complete the following steps: Log in to your hub cluster by running the following command: Run the following command on the hub cluster to create the project and namespace. The cluster name that is defined in <cluster_name> is also used as the cluster namespace in the YAML file and commands: Important: The cluster.open-cluster-management.io/managedCluster label is automatically added to and removed from a managed cluster namespace. Do not manually add it to or remove it from a managed cluster namespace. Create a file named managed-cluster.yaml with the following example content: apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: <cluster_name> labels: cloud: auto-detect vendor: auto-detect spec: hubAcceptsClient: true When the values for cloud and vendor are set to auto-detect , Red Hat Advanced Cluster Management detects the cloud and vendor types automatically from the cluster that you are importing. 
You can optionally replace the values for auto-detect with with the cloud and vendor values for your cluster. See the following example: cloud: Amazon vendor: OpenShift Apply the YAML file to the ManagedCluster resource by running the following command: You can now continue with either Importing the cluster by using the auto import secret or Importing the cluster manually . 1.7.4.2.4. Importing a cluster by using the auto import secret To import a managed cluster by using the auto import secret, you must create a secret that contains either a reference to the kubeconfig file of the cluster, or the kube API server and token pair of the cluster. Complete the following steps to import a cluster by using the auto import secret: Retrieve the kubeconfig file, or the kube API server and token, of the managed cluster that you want to import. See the documentation for your Kubernetes cluster to learn where to locate your kubeconfig file or your kube API server and token. Create the auto-import-secret.yaml file in the USD{CLUSTER_NAME} namespace. Create a YAML file named auto-import-secret.yaml by using content that is similar to the following template: apiVersion: v1 kind: Secret metadata: name: auto-import-secret namespace: <cluster_name> stringData: autoImportRetry: "5" # If you are using the kubeconfig file, add the following value for the kubeconfig file # that has the current context set to the cluster to import: kubeconfig: |- <kubeconfig_file> # If you are using the token/server pair, add the following two values instead of # the kubeconfig file: token: <Token to access the cluster> server: <cluster_api_url> type: Opaque Apply the YAML file in the <cluster_name> namespace by running the following command: Note: By default, the auto import secret is used one time and deleted when the import process completes. If you want to keep the auto import secret, add managedcluster-import-controller.open-cluster-management.io/keeping-auto-import-secret to the secret. You can add it by running the following command: Validate the JOINED and AVAILABLE status for your imported cluster. Run the following command from the hub cluster: Log in to the managed cluster by running the following command on the cluster: You can validate the pod status on the cluster that you are importing by running the following command: You can now continue with Importing the klusterlet add-on . 1.7.4.2.5. Importing a cluster manually Important: The import command contains pull secret information that is copied to each of the imported managed clusters. Anyone who can access the imported clusters can also view the pull secret information. Complete the following steps to import a managed cluster manually: Obtain the klusterlet-crd.yaml file that was generated by the import controller on your hub cluster by running the following command: Obtain the import.yaml file that was generated by the import controller on your hub cluster by running the following command: Proceed with the following steps in the cluster that you are importing: Log in to the managed cluster that you are importing by entering the following command: Apply the klusterlet-crd.yaml that you generated in step 1 by running the following command: Apply the import.yaml file that you previously generated by running the following command: You can validate the JOINED and AVAILABLE status for the managed cluster that you are importing by running the following command from the hub cluster: You can now continue with Importing the klusterlet add-on . 1.7.4.2.6. 
Importing the klusterlet add-on Implement the KlusterletAddonConfig klusterlet add-on configuration to enable other add-ons on your managed clusters. Create and apply the configuration file by completing the following steps: Create a YAML file that is similar to the following example: apiVersion: agent.open-cluster-management.io/v1 kind: KlusterletAddonConfig metadata: name: <cluster_name> namespace: <cluster_name> spec: applicationManager: enabled: true certPolicyController: enabled: true policyController: enabled: true searchCollector: enabled: true Save the file as klusterlet-addon-config.yaml . Apply the YAML by running the following command: Add-ons are installed after the managed cluster status you are importing is AVAILABLE . You can validate the pod status of add-ons on the cluster you are importing by running the following command: 1.7.4.2.7. Removing an imported cluster by using the command line interface To remove a managed cluster by using the command line interface, run the following command: Replace <cluster_name> with the name of the cluster. 1.7.4.3. Importing a managed cluster by using agent registration After you install multicluster engine for Kubernetes operator, you are ready to import a cluster and manage it by using the agent registration endpoint. Continue reading the following topics to learn how to import a managed cluster by using the agent registration endpoint. Prerequisites Supported architectures Importing a cluster 1.7.4.3.1. Prerequisites A deployed hub cluster. If you are importing bare metal clusters, the hub cluster must be installed on a supported OpenShift Container Platform version. A cluster you want to manage. The base64 command line tool. A defined multiclusterhub.spec.imagePullSecret if you are importing a cluster that was not created by OpenShift Container Platform. This secret might have been created when multicluster engine for Kubernetes operator was installed. See Custom image pull secret for more information about how to define this secret. If you need to create a new secret, see Creating a new pull secret . 1.7.4.3.2. Supported architectures Linux (x86_64, s390x, ppc64le) macOS 1.7.4.3.3. Importing a cluster To import a managed cluster by using the agent registration endpoint, complete the following steps: Get the agent registration server URL by running the following command on the hub cluster: export agent_registration_host=USD(oc get route -n multicluster-engine agent-registration -o=jsonpath="{.spec.host}") Note: If your hub cluster is using a cluster-wide-proxy, make sure that you are using the URL that managed cluster can access. Get the cacert by running the following command: oc get configmap -n kube-system kube-root-ca.crt -o=jsonpath="{.data['ca\.crt']}" > ca.crt_ Note: If you are not using the kube-root-ca issued endpoint, use the public agent-registration API endpoint CA instead of the kube-root-ca CA. 
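Before requesting the token in the next step, you can optionally confirm the values that you gathered so far. This spot check is not part of the documented procedure; it is only a quick sanity test:

    echo "${agent_registration_host}"                 # route host captured earlier
    openssl x509 -in ca.crt -noout -subject -enddate  # inspect the CA certificate that you extracted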
Get the token for the agent registration sever to authorize by applying the following YAML content: apiVersion: v1 kind: ServiceAccount metadata: name: managed-cluster-import-agent-registration-sa namespace: multicluster-engine --- apiVersion: v1 kind: Secret type: kubernetes.io/service-account-token metadata: name: managed-cluster-import-agent-registration-sa-token namespace: multicluster-engine annotations: kubernetes.io/service-account.name: "managed-cluster-import-agent-registration-sa" --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: managedcluster-import-controller-agent-registration-client rules: - nonResourceURLs: ["/agent-registration/*"] verbs: ["get"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: managed-cluster-import-agent-registration roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: managedcluster-import-controller-agent-registration-client subjects: - kind: ServiceAccount name: managed-cluster-import-agent-registration-sa namespace: multicluster-engine Run the following command to export the token: export token=USD(oc get secret -n multicluster-engine managed-cluster-import-agent-registration-sa-token -o=jsonpath='{.data.token}' | base64 -d) Enable the automatic approval and patch the content to cluster-manager by running the following command: oc patch clustermanager cluster-manager --type=merge -p '{"spec":{"registrationConfiguration":{"featureGates":[ {"feature": "ManagedClusterAutoApproval", "mode": "Enable"}], "autoApproveUsers":["system:serviceaccount:multicluster-engine:agent-registration-bootstrap"]}}}' Note: You can also disable automatic approval and manually approve certificate signing requests from managed clusters. Switch to your managed cluster and get the cacert by running the following command: curl --cacert ca.crt -H "Authorization: Bearer USDtoken" https://USDagent_registration_host/agent-registration/crds/v1 | oc apply -f - Run the following command to import the managed cluster to the hub cluster. Replace <clusterName> with the name of you cluster. Replace <duration> with a time value. For example, 4h : Optional: Replace <klusterletconfigName> with the name of your KlusterletConfig. curl --cacert ca.crt -H "Authorization: Bearer USDtoken" https://USDagent_registration_host/agent-registration/manifests/<clusterName>?klusterletconfig=<klusterletconfigName>&duration=<duration> | oc apply -f - Note: The kubeconfig bootstrap in the klusterlet manifest does not expire if you do not set a duration. 1.7.4.4. Importing an on-premises Red Hat OpenShift Container Platform cluster manually by using central infrastructure management After you install multicluster engine for Kubernetes operator, you are ready to import a managed cluster. You can import an existing OpenShift Container Platform cluster so that you can add additional nodes. Continue reading the following topics to learn more: Prerequisites Importing a cluster Importing cluster resources 1.7.4.4.1. Prerequisites Enable the central infrastructure management feature. 1.7.4.4.2. 
Importing a cluster Complete the following steps to import an OpenShift Container Platform cluster manually, without a static network or a bare metal host, and prepare it for adding nodes: Create a namespace for the OpenShift Container Platform cluster that you want to import by applying the following YAML content: apiVersion: v1 kind: Namespace metadata: name: managed-cluster Make sure that a ClusterImageSet matching the OpenShift Container Platform cluster you are importing exists by applying the following YAML content: apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-v4.15 spec: releaseImage: quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863 Add your pull secret to access the image by applying the following YAML content: apiVersion: v1 kind: Secret type: kubernetes.io/dockerconfigjson metadata: name: pull-secret namespace: managed-cluster stringData: .dockerconfigjson: <pull-secret-json> 1 1 Replace <pull-secret-json> with your pull secret JSON. Copy the kubeconfig from your OpenShift Container Platform cluster to the hub cluster. Get the kubeconfig from your OpenShift Container Platform cluster by running the following command. Make sure that kubeconfig is set as the cluster being imported: Note: If your cluster API is accessed through a custom domain, you must first edit this kubeconfig by adding your custom certificates in the certificate-authority-data field and by changing the server field to match your custom domain. Copy the kubeconfig to the hub cluster by running the following command. Make sure that kubeconfig is set as your hub cluster: Create an AgentClusterInstall custom resource by applying the following YAML content. Replace values where needed: apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: name: <your-cluster-name> 1 namespace: <managed-cluster> spec: networking: userManagedNetworking: true clusterDeploymentRef: name: <your-cluster> imageSetRef: name: openshift-v4.11.18 provisionRequirements: controlPlaneAgents: 2 sshPublicKey: <""> 3 1 Choose a name for your cluster. 2 Use 1 if you are using a single-node OpenShift cluster. Use 3 if you are using a multinode cluster. 3 Add the optional sshPublicKey field to log in to nodes for troubleshooting. Create a ClusterDeployment by applying the following YAML content. Replace values where needed: apiVersion: hive.openshift.io/v1 kind: ClusterDeployment metadata: name: <your-cluster-name> 1 namespace: managed-cluster spec: baseDomain: <redhat.com> 2 installed: <true> 3 clusterMetadata: adminKubeconfigSecretRef: name: <your-cluster-name-admin-kubeconfig> 4 clusterID: <""> 5 infraID: <""> 6 clusterInstallRef: group: extensions.hive.openshift.io kind: AgentClusterInstall name: your-cluster-name-install version: v1beta1 clusterName: your-cluster-name platform: agentBareMetal: pullSecretRef: name: pull-secret 1 Choose a name for your cluster. 2 Make sure baseDomain matches the domain you are using for your OpenShift Container Platform cluster. 3 Set to true to automatically import your OpenShift Container Platform cluster as a production environment cluster. 4 Reference the kubeconfig you created in step 4. 5 6 Leave clusterID and infraID empty in production environments. Add an InfraEnv custom resource to discover new hosts to add to your cluster by applying the following YAML content. 
Replace values where needed: Note: The following example might require additional configuration if you are not using a static IP address. apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: your-infraenv namespace: managed-cluster spec: clusterRef: name: your-cluster-name namespace: managed-cluster pullSecretRef: name: pull-secret sshAuthorizedKey: "" Table 1.5. InfraEnv field table Field Optional or required Description clusterRef Optional The clusterRef field is optional if you are using late binding. If you are not using late binding, you must add the clusterRef . sshAuthorizedKey Optional Add the optional sshAuthorizedKey field to log in to nodes for troubleshooting. If the import is successful, a URL to download an ISO file appears. Download the ISO file by running the following command, replacing <url> with the URL that appears: Note: You can automate host discovery by using bare metal hosts. Optional: If you want to use Red Hat Advanced Cluster Management features, such as policies, on your OpenShift Container Platform cluster, create a ManagedCluster resource. Make sure that the name of your ManagedCluster resource matches the name of your ClusterDeployment resource. If you are missing the ManagedCluster resource, your cluster status is detached in the console. 1.7.4.4.3. Importing cluster resources If your OpenShift Container Platform managed cluster was installed by the Assisted Installer, you can move the managed cluster and its resources from one hub cluster to another hub cluster. You can manage a cluster from a new hub cluster by saving a copy of the original resources, applying them to the new hub cluster, and deleting the original resources. You can then scale down or scale up your managed cluster from the new hub cluster. Important: You can only scale down imported OpenShift Container Platform managed clusters if they were installed by the Assisted Installer. You can import the following resources and continue to manage your cluster with them: Table 1.6. Managed cluster resource table Resource Optional or required Description Agent Required AgentClassification Optional Required if you want to classify Agents with a filter query. AgentClusterInstall Required BareMetalHost Optional Required if you are using the baremetal platform. ClusterDeployment Required InfraEnv Required NMStateConfig Optional Required if you want to apply your network configuration on the hosts. ManagedCluster Required Secret Required The admin-kubeconfig secret is required. The bmc-secret secret is only required if you are using BareMetalHosts . 1.7.4.4.3.1. Saving and applying managed cluster resources To save a copy of your managed cluster resources and apply them to a new hub cluster, complete the following steps: Get your resources from your source hub cluster by running the following command. Replace values where needed: oc --kubeconfig <source_hub_kubeconfig> -n <managed_cluster_name> get <resource_name> <cluster_provisioning_namespace> -oyaml > <resource_name>.yaml Repeat the command for every resource you want to import by replacing <resource_name> with the name of the resource. Remove the ownerReferences property from the following resources by running the following commands: AgentClusterInstall yq --in-place -y 'del(.metadata.ownerReferences)' AgentClusterInstall.yaml Secret ( admin-kubeconfig ) yq --in-place -y 'del(.metadata.ownerReferences)' AdminKubeconfigSecret.yaml Detach the managed cluster from the source hub cluster by running the following command.
Replace values where needed: oc --kubeconfig <source_hub_kubeconfig> delete ManagedCluster <cluster_name> Create a namespace on the target hub cluster for the managed cluster. Use the same namespace name that was used on the source hub cluster. Apply your stored resources on the target hub cluster individually by running the following command. Replace values where needed: Note: Replace <resource_name>.yaml with . if you want to apply all the resources as a group instead of individually. oc --kubeconfig <target_hub_kubeconfig> apply -f <resource_name>.yaml 1.7.4.4.3.2. Removing the managed cluster from the source hub cluster After importing your cluster resources, remove your managed cluster from the source hub cluster by completing the following steps: Set the spec.preserveOnDelete parameter to true in the ClusterDeployment custom resource to prevent destroying the managed cluster. Complete the steps in Removing a cluster from management . 1.7.4.5. Specifying image registry on managed clusters for import You might need to override the image registry on the managed clusters that you are importing. You can do this by creating a ManagedClusterImageRegistry custom resource definition. The ManagedClusterImageRegistry custom resource definition is a namespace-scoped resource. The ManagedClusterImageRegistry custom resource definition specifies a set of managed clusters for a Placement to select that need different images from the custom image registry. After the managed clusters are updated with the new images, the following label is added to each managed cluster for identification: open-cluster-management.io/image-registry=<namespace>.<managedClusterImageRegistryName> . The following example shows a ManagedClusterImageRegistry custom resource definition: apiVersion: imageregistry.open-cluster-management.io/v1alpha1 kind: ManagedClusterImageRegistry metadata: name: <imageRegistryName> namespace: <namespace> spec: placementRef: group: cluster.open-cluster-management.io resource: placements name: <placementName> 1 pullSecret: name: <pullSecretName> 2 registries: 3 - mirror: <mirrored-image-registry-address> source: <image-registry-address> - mirror: <mirrored-image-registry-address> source: <image-registry-address> 1 Replace with the name of a Placement in the same namespace that selects a set of managed clusters. 2 Replace with the name of the pull secret that is used to pull images from the custom image registry. 3 List the values for each of the source and mirror registries. Replace the mirrored-image-registry-address and image-registry-address with the value for each of the mirror and source values of the registries. Example 1: To replace the source image registry named registry.redhat.io/rhacm2 with localhost:5000/rhacm2 , and registry.redhat.io/multicluster-engine with localhost:5000/multicluster-engine , use the following example: registries: - mirror: localhost:5000/rhacm2/ source: registry.redhat.io/rhacm2 - mirror: localhost:5000/multicluster-engine source: registry.redhat.io/multicluster-engine Example 2: To replace the source image, registry.redhat.io/rhacm2/registration-rhel8-operator with localhost:5000/rhacm2-registration-rhel8-operator , use the following example: registries: - mirror: localhost:5000/rhacm2-registration-rhel8-operator source: registry.redhat.io/rhacm2/registration-rhel8-operator Important: If you are importing a managed cluster by using agent registration, you must create a KlusterletConfig that contains image registries. See the following example.
Replace values where needed: apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: <klusterletconfigName> spec: pullSecret: namespace: <pullSecretNamespace> name: <pullSecretName> registries: - mirror: <mirrored-image-registry-address> source: <image-registry-address> - mirror: <mirrored-image-registry-address> source: <image-registry-address> See Importing a managed cluster by using the agent registration endpoint to learn more. 1.7.4.5.1. Importing a cluster that has a ManagedClusterImageRegistry Complete the following steps to import a cluster that is customized with a ManagedClusterImageRegistry custom resource definition: Create a pull secret in the namespace where you want your cluster to be imported. For these steps, the namespace is myNamespace . Create a Placement in the namespace that you created. apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: myPlacement namespace: myNamespace spec: clusterSets: - myClusterSet tolerations: - key: "cluster.open-cluster-management.io/unreachable" operator: Exists Note: The unreachable toleration is required for the Placement to be able to select the cluster. Create a ManagedClusterSet resource and bind it to your namespace. apiVersion: cluster.open-cluster-management.io/v1beta2 kind: ManagedClusterSet metadata: name: myClusterSet --- apiVersion: cluster.open-cluster-management.io/v1beta2 kind: ManagedClusterSetBinding metadata: name: myClusterSet namespace: myNamespace spec: clusterSet: myClusterSet Create the ManagedClusterImageRegistry custom resource definition in your namespace. apiVersion: imageregistry.open-cluster-management.io/v1alpha1 kind: ManagedClusterImageRegistry metadata: name: myImageRegistry namespace: myNamespace spec: placementRef: group: cluster.open-cluster-management.io resource: placements name: myPlacement pullSecret: name: myPullSecret registry: myRegistryAddress Import a managed cluster from the console and add it to a managed cluster set. Copy and run the import commands on the managed cluster after the open-cluster-management.io/image-registry=myNamespace.myImageRegistry label is added to the managed cluster. 1.7.5. Accessing your cluster To access an Red Hat OpenShift Container Platform cluster that was created and is managed, complete the following steps: From the console, navigate to Infrastructure > Clusters and select the name of the cluster that you created or want to access. Select Reveal credentials to view the user name and password for the cluster. Note these values to use when you log in to the cluster. Note: The Reveal credentials option is not available for imported clusters. Select Console URL to link to the cluster. Log in to the cluster by using the user ID and password that you found in step three. 1.7.6. Scaling managed clusters For clusters that you created, you can customize and resize your managed cluster specifications, such as virtual machine sizes and number of nodes. See the following option if you are using installer-provisioned infrastructure for cluster deployment: Scaling with MachinePool See the following options if you are using central infrastructure management for cluster deployment: Adding worker nodes to OpenShift Container Platform clusters Adding control plane nodes to managed clusters 1.7.6.1. Scaling with MachinePool For clusters that you provision with multicluster engine operator, a MachinePool resource is automatically created for you. 
You can further customize and resize your managed cluster specifications, such as virtual machine sizes and number of nodes, by using MachinePool . Using the MachinePool resource is not supported for bare metal clusters. A MachinePool resource is a Kubernetes resource on the hub cluster that groups the MachineSet resources together on the managed cluster. The MachinePool resource uniformly configures a set of machine resources, including zone configurations, instance type, and root storage. With MachinePool , you can manually configure the desired number of nodes or configure autoscaling of nodes on the managed cluster. 1.7.6.1.1. Configure autoscaling Configuring autoscaling provides the flexibility of your cluster to scale as needed to lower your cost of resources by scaling down when traffic is low, and by scaling up to ensure that there are enough resources when there is a higher demand for resources. To enable autoscaling on your MachinePool resources using the console, complete the following steps: In the navigation, select Infrastructure > Clusters . Click the name of your target cluster and select the Machine pools tab. From the machine pools page, select Enable autoscale from the Options menu for the target machine pool. Select the minimum and maximum number of machine set replicas. A machine set replica maps directly to a node on the cluster. The changes might take several minutes to reflect on the console after you click Scale . You can view the status of the scaling operation by clicking View machines in the notification of the Machine pools tab. To enable autoscaling on your MachinePool resources using the command line, complete the following steps: Enter the following command to view your list of machine pools, replacing managed-cluster-namespace with the namespace of your target managed cluster. Enter the following command to edit the YAML file for the machine pool: Replace MachinePool-resource-name with the name of your MachinePool resource. Replace managed-cluster-namespace with the name of the namespace of your managed cluster. Delete the spec.replicas field from the YAML file. Add the spec.autoscaling.minReplicas setting and spec.autoscaling.maxReplicas fields to the resource YAML. Add the minimum number of replicas to the minReplicas setting. Add the maximum number of replicas into the maxReplicas setting. Save the file to submit the changes. 1.7.6.1.2. Disabling autoscaling You can disable autoscaling by using the console or the command line. To disable autoscaling by using the console, complete the following steps: In the navigation, select Infrastructure > Clusters . Click the name of your target cluster and select the Machine pools tab. From the machine pools page, select Disable autoscale from the Options menu for the target machine pool. Select the number of machine set replicas that you want. A machine set replica maps directly with a node on the cluster. It might take several minutes to display in the console after you click Scale . You can view the status of the scaling by clicking View machines in the notification on the Machine pools tab. To disable autoscaling by using the command line, complete the following steps: Enter the following command to view your list of machine pools: Replace managed-cluster-namespace with the namespace of your target managed cluster. Enter the following command to edit the YAML file for the machine pool: Replace name-of-MachinePool-resource with the name of your MachinePool resource. 
Replace namespace-of-managed-cluster with the name of the namespace of your managed cluster. Delete the spec.autoscaling field from the YAML file. Add the spec.replicas field to the resource YAML. Add the number of replicas to the replicas setting. Save the file to submit the changes. 1.7.6.1.3. Enabling manual scaling You can scale manually from the console and from the command line. 1.7.6.1.3.1. Enabling manual scaling with the console To scale your MachinePool resources using the console, complete the following steps: Disable autoscaling for your MachinePool if it is enabled. See the steps. From the console, click Infrastructure > Clusters . Click the name of your target cluster and select the Machine pools tab. From the machine pools page, select Scale machine pool from the Options menu for the targeted machine pool. Select the number of machine set replicas that you want. A machine set replica maps directly with a node on the cluster. Changes might take several minutes to reflect on the console after you click Scale . You can view the status of the scaling operation by clicking View machines from the notification of the Machine pools tab. 1.7.6.1.3.2. Enabling manual scaling with the command line To scale your MachinePool resources by using the command line, complete the following steps: Enter the following command to view your list of machine pools, replacing <managed-cluster-namespace> with the namespace of your target managed cluster namespace: Enter the following command to edit the YAML file for the machine pool: Replace MachinePool-resource-name with the name of your MachinePool resource. Replace managed-cluster-namespace with the name of the namespace of your managed cluster. Delete the spec.autoscaling field from the YAML file. Modify the spec.replicas field in the YAML file with the number of replicas you want. Save the file to submit the changes. 1.7.6.2. Adding worker nodes to OpenShift Container Platform clusters If you are using central infrastructure management, you can customize your OpenShift Container Platform clusters by adding additional production environment nodes. Required access: Administrator Prerequisites Creating a valid kubeconfig Adding worker nodes 1.7.6.2.1. Prerequisites You must have the new CA certificates required to trust the managed cluster API. 1.7.6.2.2. Creating a valid kubeconfig Before adding production environment worker nodes to OpenShift Container Platform clusters, you must check if you have a valid kubeconfig . If the API certificates in your managed cluster changed, complete the following steps to update the kubeconfig with new CA certificates: Check if the kubeconfig for your clusterDeployment is valid by running the following commands. Replace <kubeconfig_name> with the name of your current kubeconfig and replace <cluster_name> with the name of your cluster: export <kubeconfig_name>=USD(oc get cd USD<cluster_name> -o "jsonpath={.spec.clusterMetadata.adminKubeconfigSecretRef.name}") oc extract secret/USD<kubeconfig_name> --keys=kubeconfig --to=- > original-kubeconfig oc --kubeconfig=original-kubeconfig get node If you receive the following error message, you must update your kubeconfig secret. If you receive no error message, continue to Adding worker nodes : Get the base64 encoded certificate bundle from your kubeconfig certificate-authority-data field and decode it by running the following command: echo <base64 encoded blob> | base64 --decode > decoded-existing-certs.pem Create an updated kubeconfig file by copying your original file. 
Run the following command and replace <new_kubeconfig_name> with the name of your new kubeconfig file: cp original-kubeconfig <new_kubeconfig_name> Append new certificates to the decoded pem by running the following command: cat decoded-existing-certs.pem new-ca-certificate.pem | openssl base64 -A Add the base64 output from the command as the value of the certificate-authority-data key in your new kubeconfig file by using a text editor. Check if the new kubeconfig is valid by querying the API with the new kubeconfig . Run the following command. Replace <new_kubeconfig_name> with the name of your new kubeconfig file: KUBECONFIG=<new_kubeconfig_name> oc get nodes If you receive a successful output, the kubeconfig is valid. Update the kubeconfig secret in the Red Hat Advanced Cluster Management hub cluster by running the following command. Replace <new_kubeconfig_name> with the name of your new kubeconfig file: oc patch secret $original-kubeconfig --type='json' -p="[{'op': 'replace', 'path': '/data/kubeconfig', 'value': '$(openssl base64 -A -in <new_kubeconfig_name>)'},{'op': 'replace', 'path': '/data/raw-kubeconfig', 'value': '$(openssl base64 -A -in <new_kubeconfig_name>)'}]" 1.7.6.2.3. Adding worker nodes If you have a valid kubeconfig , complete the following steps to add production environment worker nodes to OpenShift Container Platform clusters: Boot the machine that you want to use as a worker node from the ISO you previously downloaded. Note: Make sure that the worker node meets the requirements for an OpenShift Container Platform worker node. Wait for an agent to register after running the following command: watch -n 5 "oc get agent -n managed-cluster" If the agent registration is successful, an agent is listed. Approve the agent for installation. This can take a few minutes. Note: If the agent is not listed, exit the watch command by pressing Ctrl and C, then log in to the worker node to troubleshoot. If you are using late binding, run the following command to associate pending unbound agents with your OpenShift Container Platform cluster. Skip to step 5 if you are not using late binding: oc get agent -n managed-cluster -ojson | jq -r '.items[] | select(.spec.approved==false) |select(.spec.clusterDeploymentName==null) | .metadata.name'| xargs oc -n managed-cluster patch -p '{"spec":{"clusterDeploymentName":{"name":"some-other-cluster","namespace":"managed-cluster"}}}' --type merge agent Approve any pending agents for installation by running the following command: oc get agent -n managed-cluster -ojson | jq -r '.items[] | select(.spec.approved==false) | .metadata.name'| xargs oc -n managed-cluster patch -p '{"spec":{"approved":true}}' --type merge agent Wait for the installation of the worker node. When the worker node installation is complete, the worker node contacts the managed cluster with a Certificate Signing Request (CSR) to start the joining process. The CSR is automatically signed. 1.7.6.3. Adding control plane nodes to managed clusters You can replace a failing control plane by adding control plane nodes to healthy or unhealthy managed clusters. Required access: Administrator 1.7.6.3.1. Adding control plane nodes to healthy managed clusters Complete the following steps to add control plane nodes to healthy managed clusters: Complete the steps in Adding worker nodes to OpenShift Container Platform clusters for your new control plane node. If you are using the Discovery ISO to add a node, set the agent to master before you approve the agent.
Run the following command: oc patch agent <AGENT-NAME> -p '{"spec":{"role": "master"}}' --type=merge Note: CSRs are not automatically approved. If you are using a BareMetalHost to add a node, add the following line to your BareMetalHost annotations when creating the BareMetalHost resource: bmac.agent-install.openshift.io/role: master Follow the steps in Installing a primary control plane node on a healthy cluster in the Assisted Installer for OpenShift Container Platform documentation. 1.7.6.3.2. Adding control plane nodes to unhealthy managed clusters Complete the following steps to add control plane nodes to unhealthy managed clusters: Remove the agent for unhealthy control plane nodes. If you used the zero-touch provisioning flow for deployment, remove the bare metal host. Complete the steps in Adding worker nodes to OpenShift Container Platform clusters for your new control plane node. Set the agent to master before you approve the agent by running the following command: oc patch agent <AGENT-NAME> -p '{"spec":{"role": "master"}}' --type=merge Note: CSRs are not automatically approved. Follow the steps in Installing a primary control plane node on an unhealthy cluster in the Assisted Installer for OpenShift Container Platform documentation. 1.7.7. Hibernating a created cluster You can hibernate a cluster that was created using multicluster engine operator to conserve resources. A hibernating cluster requires significantly fewer resources than one that is running, so you can potentially lower your provider costs by moving clusters in and out of a hibernating state. This feature only applies to clusters that were created by multicluster engine operator in the following environments: Amazon Web Services Microsoft Azure Google Cloud Platform 1.7.7.1. Hibernate a cluster by using the console To use the console to hibernate a cluster that was created by multicluster engine operator, complete the following steps: From the navigation menu, select Infrastructure > Clusters . Ensure that the Manage clusters tab is selected. Select Hibernate cluster from the Options menu for the cluster. Note: If the Hibernate cluster option is not available, you cannot hibernate the cluster. This can happen when the cluster is imported, and not created by multicluster engine operator. The status for the cluster on the Clusters page is Hibernating when the process completes. Tip: You can hibernate multiple clusters by selecting the clusters that you want to hibernate on the Clusters page, and selecting Actions > Hibernate clusters . Your selected cluster is hibernating. 1.7.7.2. Hibernate a cluster by using the CLI To use the CLI to hibernate a cluster that was created by multicluster engine operator, complete the following steps: Enter the following command to edit the settings for the cluster that you want to hibernate: Replace name-of-cluster with the name of the cluster that you want to hibernate. Replace namespace-of-cluster with the namespace of the cluster that you want to hibernate. Change the value for spec.powerState to Hibernating . Enter the following command to view the status of the cluster: Replace name-of-cluster with the name of the cluster that you want to hibernate. Replace namespace-of-cluster with the namespace of the cluster that you want to hibernate. When the process of hibernating the cluster is complete, the value of the type for the cluster is type=Hibernating . Your selected cluster is hibernating. 1.7.7.3.
Resuming normal operation of a hibernating cluster by using the console To resume normal operation of a hibernating cluster by using the console, complete the following steps: From the navigation menu, select Infrastructure > Clusters . Ensure that the Manage clusters tab is selected. Select Resume cluster from the Options menu for the cluster that you want to resume. The status for the cluster on the Clusters page is Ready when the process completes. Tip: You can resume multiple clusters by selecting the clusters that you want to resume on the Clusters page, and selecting Actions > Resume clusters . Your selected cluster is resuming normal operation. 1.7.7.4. Resuming normal operation of a hibernating cluster by using the CLI To resume normal operation of a hibernating cluster by using the CLI, complete the following steps: Enter the following command to edit the settings for the cluster: Replace name-of-cluster with the name of the cluster that you want to resume. Replace namespace-of-cluster with the namespace of the cluster that you want to resume. Change the value for spec.powerState to Running . Enter the following command to view the status of the cluster: Replace name-of-cluster with the name of the cluster that you want to resume. Replace namespace-of-cluster with the namespace of the cluster that you want to resume. When the process of resuming the cluster is complete, the value of the type for the cluster is type=Running . Your selected cluster is resuming normal operation. 1.7.8. Upgrading your cluster After you create Red Hat OpenShift Container Platform clusters that you want to manage with multicluster engine operator, you can use the multicluster engine operator console to upgrade those clusters to the latest minor version that is available in the version channel that the managed cluster uses. In a connected environment, the updates are automatically identified with notifications provided for each cluster that requires an upgrade in the console. 1.7.8.1. Prerequisites Verify that you meet all of the prerequisites for upgrading to that version. You must update the version channel on the managed cluster before you can upgrade the cluster with the console. Note: After you update the version channel on the managed cluster, the multicluster engine operator console displays the latest versions that are available for the upgrade. Your OpenShift Container Platform managed clusters must be in a Ready state. Important: You cannot upgrade Red Hat OpenShift Kubernetes Service managed clusters or OpenShift Container Platform managed clusters on Red Hat OpenShift Dedicated by using the multicluster engine operator console. 1.7.8.2. Upgrading your cluster in a connected environment To upgrade your cluster in a connected environment, complete the following steps: From the navigation menu, go to Infrastructure > Clusters . If an upgrade is available, it appears in the Distribution version column. Select the clusters in Ready state that you want to upgrade. You can only upgrade OpenShift Container Platform clusters in the console. Select Upgrade . Select the new version of each cluster. Select Upgrade . If your cluster upgrade fails, the Operator generally retries the upgrade a few times, stops, and reports the status of the failing component. In some cases, the upgrade process continues to cycle through attempts to complete the process. Rolling your cluster back to a version following a failed upgrade is not supported.
Contact Red Hat support for assistance if your cluster upgrade fails. 1.7.8.3. Selecting a channel You can use the console to select a channel for your cluster upgrades on OpenShift Container Platform. After selecting a channel, you are automatically reminded of cluster upgrades that are available for both Errata versions and release versions. To select a channel for your cluster, complete the following steps: From the navigation, select Infrastructure > Clusters . Select the name of the cluster that you want to change to view the Cluster details page. If a different channel is available for the cluster, an edit icon is displayed in the Channel field. Click the Edit icon to change the setting in the field. Select a channel in the New channel field. You can find the reminders for the available channel updates in the Cluster details page of the cluster. 1.7.8.4. Upgrading a disconnected cluster You can use OpenShift Update Service with multicluster engine operator to upgrade clusters in a disconnected environment. In some cases, security concerns prevent clusters from being connected directly to the internet. This makes it difficult to know when upgrades are available, and how to process those upgrades. Configuring OpenShift Update Service can help. OpenShift Update Service is a separate operator and operand that monitors the available versions of your managed clusters in a disconnected environment, and makes them available for upgrading your clusters in a disconnected environment. After you configure OpenShift Update Service, it can perform the following actions: Monitor when upgrades are available for your disconnected clusters. Identify which updates are mirrored to your local site for upgrading by using the graph data file. Notify you that an upgrade is available for your cluster by using the console. The following topics explain the procedure for upgrading a disconnected cluster: Prerequisites Prepare your disconnected mirror registry Deploy the operator for OpenShift Update Service Build the graph data init container Configure certificate for the mirrored registry Deploy the OpenShift Update Service instance Override the default registry (optional) Deploy a disconnected catalog source Change the managed cluster parameter Viewing available upgrades Selecting a channel Upgrading the cluster 1.7.8.4.1. Prerequisites You must have the following prerequisites before you can use OpenShift Update Service to upgrade your disconnected clusters: A deployed hub cluster that is running on a supported OpenShift Container Platform version with restricted OLM configured. See Using Operator Lifecycle Manager on restricted networks for details about how to configure restricted OLM. Note: Make a note of the catalog source image when you configure restricted OLM. An OpenShift Container Platform cluster that is managed by the hub cluster Access credentials to a local repository where you can mirror the cluster images. See Disconnected installation mirroring for more information about how to create this repository. Note: The image for the current version of the cluster that you upgrade must always be available as one of the mirrored images. If an upgrade fails, the cluster reverts back to the version of the cluster at the time that the upgrade was attempted. 1.7.8.4.2. Prepare your disconnected mirror registry You must mirror both the image that you want to upgrade to and the current image that you are upgrading from to your local mirror registry. 
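The sample mirror script that the first step of the next procedure refers to is not reproduced in this excerpt. The following is a minimal sketch of what such a script typically contains, based on the standard oc adm release mirror workflow; the registry address, repository, release version, and architecture are placeholder assumptions, and /path/to/pull/secret matches the placeholder used in the step that follows. Run it once for the release you are upgrading to and once for the release you are upgrading from, and note that the command prints the mirror mappings that you can reuse when you create your ImageContentSourcePolicy. The numbered steps after this sketch describe how the script is used.

# Minimal mirroring script sketch; every value is a placeholder that you must replace.
export OCP_RELEASE=4.15.8                         # release you are mirroring
export ARCHITECTURE=x86_64
export LOCAL_REGISTRY=mirror.example.com:5000     # hypothetical local mirror registry
export LOCAL_REPOSITORY=ocp4/release
export LOCAL_SECRET_JSON=/path/to/pull/secret     # your OpenShift Container Platform pull secret

oc adm release mirror \
  -a ${LOCAL_SECRET_JSON} \
  --from=quay.io/openshift-release-dev/ocp-release:${OCP_RELEASE}-${ARCHITECTURE} \
  --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
  --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}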
Complete the following steps to mirror the images: Create a script file that contains content that resembles the following example: 1 Replace /path/to/pull/secret with the path to your OpenShift Container Platform pull secret. Run the script to mirror the images, configure settings, and separate the release images from the release content. You can use the output of the last line of this script when you create your ImageContentSourcePolicy . 1.7.8.4.3. Deploy the operator for OpenShift Update Service To deploy the operator for OpenShift Update Service in your OpenShift Container Platform environment, complete the following steps: On the hub cluster, access the OpenShift Container Platform operator hub. Deploy the operator by selecting OpenShift Update Service Operator . Update the default values, if necessary. The deployment of the operator creates a new project named openshift-cincinnati . Wait for the installation of the operator to finish. You can check the status of the installation by entering the oc get pods command on your OpenShift Container Platform command line. Verify that the operator is in the running state. 1.7.8.4.4. Build the graph data init container OpenShift Update Service uses graph data information to determine the available upgrades. In a connected environment, OpenShift Update Service pulls the graph data information for available upgrades directly from the Cincinnati graph data GitHub repository . Because you are configuring a disconnected environment, you must make the graph data available in a local repository by using an init container . Complete the following steps to create a graph data init container : Clone the graph data Git repository by entering the following command: Create a file that contains the information for your graph data init . You can find this sample Dockerfile in the cincinnati-operator GitHub repository. The contents of the file is shown in the following sample: In this example: 1 The FROM value is the external registry where OpenShift Update Service finds the images. 2 3 The RUN commands create the directory and package the upgrade files. 4 The CMD command copies the package file to the local repository and extracts the files for an upgrade. Run the following commands to build the graph data init container : 1 Replace path_to_Dockerfile with the path to the file that you created in the step. 2 Replace USD{DISCONNECTED_REGISTRY}/cincinnati/cincinnati-graph-data-container with the path to your local graph data init container. 3 Replace /path/to/pull_secret with the path to your pull secret file. Note: You can also replace podman in the commands with docker , if you don't have podman installed. 1.7.8.4.5. Configure certificate for the mirrored registry If you are using a secure external container registry to store your mirrored OpenShift Container Platform release images, OpenShift Update Service requires access to this registry to build an upgrade graph. Complete the following steps to configure your CA certificate to work with the OpenShift Update Service pod: Find the OpenShift Container Platform external registry API, which is located in image.config.openshift.io . This is where the external registry CA certificate is stored. See Configuring additional trust stores for image registry access in the OpenShift Container Platform documentation for more information. Create a ConfigMap in the openshift-config namespace. Add your CA certificate under the key updateservice-registry . 
OpenShift Update Service uses this setting to locate your certificate: apiVersion: v1 kind: ConfigMap metadata: name: trusted-ca data: updateservice-registry: | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- Edit the cluster resource in the image.config.openshift.io API to set the additionalTrustedCA field to the name of the ConfigMap that you created. Replace trusted-ca with the path to your new ConfigMap. The OpenShift Update Service Operator watches the image.config.openshift.io API and the ConfigMap you created in the openshift-config namespace for changes, then restart the deployment if the CA cert has changed. 1.7.8.4.6. Deploy the OpenShift Update Service instance When you finish deploying the OpenShift Update Service instance on your hub cluster, this instance is located where the images for the cluster upgrades are mirrored and made available to the disconnected managed cluster. Complete the following steps to deploy the instance: If you do not want to use the default namespace of the operator, which is openshift-cincinnati , create a namespace for your OpenShift Update Service instance: In the OpenShift Container Platform hub cluster console navigation menu, select Administration > Namespaces . Select Create Namespace . Add the name of your namespace, and any other information for your namespace. Select Create to create the namespace. In the Installed Operators section of the OpenShift Container Platform console, select OpenShift Update Service Operator . Select Create Instance in the menu. Paste the contents from your OpenShift Update Service instance. Your YAML instance might resemble the following manifest: apiVersion: cincinnati.openshift.io/v1beta2 kind: Cincinnati metadata: name: openshift-update-service-instance namespace: openshift-cincinnati spec: registry: <registry_host_name>:<port> 1 replicas: 1 repository: USD{LOCAL_REGISTRY}/ocp4/release graphDataImage: '<host_name>:<port>/cincinnati-graph-data-container' 2 1 Replace the spec.registry value with the path to your local disconnected registry for your images. 2 Replace the spec.graphDataImage value with the path to your graph data init container. This is the same value that you used when you ran the podman push command to push your graph data init container. Select Create to create the instance. From the hub cluster CLI, enter the oc get pods command to view the status of the instance creation. It might take a while, but the process is complete when the result of the command shows that the instance and the operator are running. 1.7.8.4.7. Override the default registry (optional) Note: The steps in this section only apply if you have mirrored your releases into your mirrored registry. OpenShift Container Platform has a default image registry value that specifies where it finds the upgrade packages. In a disconnected environment, you can create an override to replace that value with the path to your local image registry where you mirrored your release images. Complete the following steps to override the default registry: Create a YAML file named mirror.yaml that resembles the following content: apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: <your-local-mirror-name> 1 spec: repositoryDigestMirrors: - mirrors: - <your-registry> 2 source: registry.redhat.io 1 Replace your-local-mirror-name with the name of your local mirror. 2 Replace your-registry with the path to your local mirror repository. 
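The command that applies this override is not reproduced in this excerpt. A minimal sketch follows, assuming that you saved the preceding content as mirror.yaml and that you are logged in to the managed cluster; the verification command is an optional addition.

# Run on the managed cluster to create the override from the file described above.
oc apply -f mirror.yaml

# Optional: confirm that the ImageContentSourcePolicy exists and lists your mirrors.
oc get imagecontentsourcepolicy <your-local-mirror-name> -o yaml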
Note: You can find your path to your local mirror by entering the oc adm release mirror command. Using the command line of the managed cluster, run the following command to override the default registry: 1.7.8.4.8. Deploy a disconnected catalog source On the managed cluster, disable all of the default catalog sources and create a new one. Complete the following steps to change the default location from a connected location to your disconnected local registry: Create a YAML file named source.yaml that resembles the following content: apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true --- apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace spec: sourceType: grpc image: '<registry_host_name>:<port>/olm/redhat-operators:v1' 1 displayName: My Operator Catalog publisher: grpc 1 Replace the value of spec.image with the path to your local restricted catalog source image. On the command line of the managed cluster, change the catalog source by running the following command: 1.7.8.4.9. Change the managed cluster parameter Update the ClusterVersion resource information on the managed cluster to change the default location from where it retrieves its upgrades. From the managed cluster, confirm that the ClusterVersion upstream parameter is currently the default public OpenShift Update Service operand by entering the following command: The returned content might resemble the following content with 4.x set as the supported version: apiVersion: v1 items: - apiVersion: config.openshift.io/v1 kind: ClusterVersion [..] spec: channel: stable-4.x upstream: https://api.openshift.com/api/upgrades_info/v1/graph From the hub cluster, identify the route URL to the OpenShift Update Service operand by entering the following command: Note the returned value for later steps. On the command line of the managed cluster, edit the ClusterVersion resource by entering the following command: Replace the value of spec.channel with your new version. Replace the value of spec.upstream with the path to your hub cluster OpenShift Update Service operand. You can complete the following steps to determine the path to your operand: Run the following command on the hub cluster: Find the path to cincinnati . The path the operand is the value in the HOST/PORT field. On the command line of the managed cluster, confirm that the upstream parameter in the ClusterVersion is updated with the local hub cluster OpenShift Update Service URL by entering the following command: The results resemble the following content: apiVersion: v1 items: - apiVersion: config.openshift.io/v1 kind: ClusterVersion [..] spec: channel: stable-4.x upstream: https://<hub-cincinnati-uri>/api/upgrades_info/v1/graph 1.7.8.4.10. Viewing available upgrades On the Clusters page, the Distribution version of the cluster indicates that there is an upgrade available, if there is an upgrade in the disconnected registry. You can view the available upgrades by selecting the cluster and selecting Upgrade clusters from the Actions menu. If the optional upgrade paths are available, the available upgrades are listed. Note: No available upgrade versions are shown if the current version is not mirrored into the local image repository. 1.7.8.4.11. Selecting a channel You can use the console to select a channel for your cluster upgrades on OpenShift Container Platform version 4.6 or later. Those versions must be available on the mirror registry. 
Complete the steps in Selecting a channel to specify a channel for your upgrades. 1.7.8.4.12. Upgrading the cluster After you configure the disconnected registry, multicluster engine operator and OpenShift Update Service use the disconnected registry to determine if upgrades are available. If no available upgrades are displayed, make sure that you have the release image of the current level of the cluster and at least one later level mirrored in the local repository. If the release image for the current version of the cluster is not available, no upgrades are available. On the Clusters page, the Distribution version of the cluster indicates that there is an upgrade available, if there is an upgrade in the disconnected registry. You can upgrade the image by clicking Upgrade available and selecting the version for the upgrade. The managed cluster is updated to the selected version. If your cluster upgrade fails, the Operator generally retries the upgrade a few times, stops, and reports the status of the failing component. In some cases, the upgrade process continues to cycle through attempts to complete the process. Rolling your cluster back to a version following a failed upgrade is not supported. Contact Red Hat support for assistance if your cluster upgrade fails. 1.7.9. Using cluster proxy add-ons In some environments, a managed cluster is behind a firewall and cannot be accessed directly by the hub cluster. To gain access, you can set up a proxy add-on to access the kube-apiserver of the managed cluster to provide a more secure connection. Important: There must not be a cluster-wide proxy configuration on your hub cluster. Required access: Editor To configure a cluster proxy add-on for a hub cluster and a managed cluster, complete the following steps: Configure the kubeconfig file to access the managed cluster kube-apiserver by completing the following steps: Provide a valid access token for the managed cluster. Note: : You can use the corresponding token of the service account. You can also use the default service account that is in the default namespace. Export the kubeconfig file of the managed cluster by running the following command: Add a role to your service account that allows it to access pods by running the following commands: Run the following command to locate the secret of the service account token: Replace default-token with the name of your secret. Run the following command to copy the token: Replace default-token with the name of your secret. Configure the kubeconfig file on the Red Hat Advanced Cluster Management hub cluster. Export the current kubeconfig file on the hub cluster by running the following command: Modify the server file with your editor. This example uses commands when using sed . Run alias sed=gsed , if you are using OSX. Delete the original user credentials by entering the following commands: Add the token of the service account: List all of the pods on the target namespace of the target managed cluster by running the following command: Replace the default namespace with the namespace that you want to use. Access other services on the managed cluster. This feature is available when the managed cluster is a Red Hat OpenShift Container Platform cluster. 
The service must use service-serving-certificate to generate server certificates: From the managed cluster, use the following service account token: From the hub cluster, convert the certificate authority to a file by running the following command: Get Prometheus metrics of the managed cluster by using the following commands: 1.7.9.1. Configuring proxy settings for cluster proxy add-ons You can configure the proxy settings for cluster proxy add-ons to allow a managed cluster to communicate with the hub cluster through a HTTP and HTTPS proxy server. You might need to configure the proxy settings if the cluster proxy add-on agent requires access to the hub cluster through the proxy server. To configure the proxy settings for the cluster proxy add-on, complete the following steps: Create an AddOnDeploymentConfig resource on your hub cluster and add the spec.proxyConfig parameter. See the following example: apiVersion: addon.open-cluster-management.io/v1alpha1 kind: AddOnDeploymentConfig metadata: name: <name> 1 namespace: <namespace> 2 spec: agentInstallNamespace: open-cluster-management-agent-addon proxyConfig: httpsProxy: "http://<username>:<password>@<ip>:<port>" 3 noProxy: ".cluster.local,.svc,172.30.0.1" 4 caBundle: <value> 5 1 Add your add-on deployment config name. 2 Add your managed cluster name. 3 Specify either a HTTP proxy or a HTTPS proxy. 4 Add the IP address of the kube-apiserver . To get the IP address, run following command on your managed cluster: oc -n default describe svc kubernetes | grep IP: 5 If you specify a HTTPS proxy in the httpsProxy field, set the proxy server CA bundle. Update the ManagedClusterAddOn resource by referencing the AddOnDeploymentConfig resource that you created. See the following example: apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: cluster-proxy namespace: <namespace> 1 spec: installNamespace: open-cluster-management-addon configs: group: addon.open-cluster-management.io resource: addondeploymentconfigs name: <name> 2 namespace: <namespace> 3 1 Add your managed cluster name. 2 Add your add-on deployment config name. 3 Add your managed cluster name. Verify the proxy settings by checking if the cluster proxy agent pod in the open-cluster-management-addon namespace has HTTPS_PROXY or NO_PROXY environment variables on the managed cluster. 1.7.10. Configuring Ansible Automation Platform tasks to run on managed clusters multicluster engine operator is integrated with Red Hat Ansible Automation Platform so that you can create prehook and posthook Ansible job instances that occur before or after creating or upgrading your clusters. Configuring prehook and posthook jobs for cluster destroy, and cluster scale actions are not supported. Required access: Cluster administrator Prerequisites Configuring an Automation template to run on a cluster by using the console Creating an Automation template Viewing the status of an Ansible job Pushing custom labels from the ClusterCurator resource to the automation job pod Using the ClusterCurator for Extended Update Support (EUS) upgrades 1.7.10.1. Prerequisites You must meet the following prerequisites to run Automation templates on your clusters: Install OpenShift Container Platform. Install the Ansible Automation Platform Resource Operator to connect Ansible jobs to the lifecycle of Git subscriptions. 
For best results when using the Automation template to launch Ansible Automation Platform jobs, the Ansible Automation Platform job template should be idempotent when it is run. You can find the Ansible Automation Platform Resource Operator in the OpenShift Container Platform OperatorHub . When installing the Ansible Automation Platform Resource Operator, you must select the *-cluster-scoped channel and select the all namespaces installation mode. 1.7.10.2. Configuring an Automation template to run on a cluster by using the console You can specify the Automation template that you want to use for a cluster when you create the cluster, when you import the cluster, or after you create the cluster. To specify the template when creating or importing a cluster, select the Ansible template that you want to apply to the cluster in the Automation step. If there are no Automation templates, click Add automation template to create one. To specify the template after creating a cluster, click Update automation template in the action menu of an existing cluster. You can also use the Update automation template option to update an existing automation template. 1.7.10.3. Creating an Automation template To initiate an Ansible job with a cluster installation or upgrade, you must create an Automation template to specify when you want the jobs to run. They can be configured to run before or after the cluster installs or upgrades. To specify the details about running the Ansible template while creating a template, complete the steps in the console: Select Infrastructure > Automation from the navigation. Select the applicable path for your situation: If you want to create a new template, click Create Ansible template and continue with step 3. If you want to modify an existing template, click Edit template from the Options menu of the template that you want to modify and continue with step 5. Enter a unique name for your template, which contains lowercase alphanumeric characters or a hyphen (-). Select the credential that you want to use for the new template. After you select a credential, you can select an Ansible inventory to use for all the jobs. To link an Ansible credential to an Ansible template, complete the following steps: From the navigation, select Automation . Any template in the list of templates that is not linked to a credential contains a Link to credential icon that you can use to link the template to an existing credential. Only the credentials in the same namespace as the template are displayed. If there are no credentials that you can select, or if you do not want to use an existing credential, select Edit template from the Options menu for the template that you want to link. Click Add credential and complete the procedure in Creating a credential for Ansible Automation Platform if you have to create your credential. After you create your credential in the same namespace as the template, select the credential in the Ansible Automation Platform credential field when you edit the template. If you want to initiate any Ansible jobs before the cluster is installed, select Add an Automation template in the Pre-install Automation templates section. Select between a Job template or a Workflow job template in the modal that appears. You can also add job_tags , skip_tags , and workflow types. Use the Extra variables field to pass data to the AnsibleJob resource in the form of key=value pairs. Special keys cluster_deployment and install_config are passed automatically as extra variables. 
They contain general information about the cluster and details about the cluster installation configuration. Select the name of the prehook and posthook Ansible jobs to add to the installation or upgrade of the cluster. Drag the Ansible jobs to change the order, if necessary. Repeat steps 5 - 7 for any Automation templates that you want to initiate after the cluster is installed in the Post-install Automation templates section, the Pre-upgrade Automation templates section, and the Post-upgrade Automation templates section. When upgrading a cluster, you can use the Extra variables field to pass data to the AnsibleJob resource in the form of key=value pairs. In addition to the cluster_deployment and install_config special keys, the cluster_info special key is also passed automatically as an extra variable containing data from the ManagedClusterInfo resource. Your Ansible template is configured to run on clusters that specify this template when the designated actions occur. 1.7.10.4. Viewing the status of an Ansible job You can view the status of a running Ansible job to ensure that it started, and is running successfully. To view the current status of a running Ansible job, complete the following steps: In the menu, select Infrastructure > Clusters to access the Clusters page. Select the name of the cluster to view its details. View the status of the last run of the Ansible job on the cluster information. The entry shows one of the following statuses: When an install prehook or posthook job fails, the cluster status shows Failed . When an upgrade prehook or posthook job fails, a warning is displayed in the Distribution field that the upgrade failed. 1.7.10.5. Running a failed Ansible job again You can retry an upgrade from the Clusters page if the cluster prehook or posthook failed. To save time, you can also run only the failed Ansible posthooks that are part of cluster automation templates. Complete the following steps to run only the posthooks again, without retrying the entire upgrade: Add the following content to the root of the ClusterCurator resource to run the install posthook again: operation: retryPosthook: installPosthook Add the following content to the root of the ClusterCurator resource to run the upgrade posthook again: operation: retryPosthook: upgradePosthook After adding the content, a new job is created to run the Ansible posthook. 1.7.10.6. Specifying an Ansible inventory to use for all jobs You can use the ClusterCurator resource to specify an Ansible inventory to use for all jobs. See the following example. Replace channel and desiredUpdate with the correct values for your ClusterCurator : apiVersion: cluster.open-cluster-management.io/v1beta1 kind: ClusterCurator metadata: name: test-inno namespace: test-inno spec: desiredCuration: upgrade destroy: {} install: {} scale: {} upgrade: channel: stable-4.x desiredUpdate: 4.x.1 monitorTimeout: 150 posthook: - extra_vars: {} clusterName: test-inno type: post_check name: ACM Upgrade Checks prehook: - extra_vars: {} clusterName: test-inno type: pre_check name: ACM Upgrade Checks towerAuthSecret: awx inventory: Demo Inventory Note: To use the example resource, the inventory must already exist in Ansible. You can verify that the inventory is created by checking the list of available Ansible inventories from the console. 1.7.10.7. Pushing custom labels from the ClusterCurator resource to the automation job pod You can use the ClusterCurator resource to push custom labels to the automation job pod created by the Cluster Curator. 
You can push the custom labels on all curation types. See the following example: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: ClusterCurator metadata: name: cluster1 namespace: cluster1 labels: test1: test1 test2: test2 spec: desiredCuration: install install: jobMonitorTimeout: 5 posthook: - extra_vars: {} name: Demo Job Template type: Job prehook: - extra_vars: {} name: Demo Job Template type: Job towerAuthSecret: toweraccess 1.7.10.8. Using the ClusterCurator for Extended Update Support (EUS) upgrades You can use the ClusterCurator resource to perform an easier, automatic upgrade between EUS releases. Add spec.upgrade.intermediateUpdate to the ClusterCurator resource with the intermediate release value. See the following sample, where the intermediate release is 4.14.x, and the desiredUpdate is 4.15.x: spec: desiredCuration: upgrade upgrade: intermediateUpdate: 4.14.x desiredUpdate: 4.15.x monitorTimeout: 120 Optional: You can pause the machineconfigpools to skip the intermediate release for a faster upgrade. Enter Unpause machinepool in the posthook job, and Pause machinepool in the prehook job. See the following example: posthook: - extra_vars: {} name: Unpause machinepool type: Job prehook: - extra_vars: {} name: Pause machinepool type: Job See the following full example of the ClusterCurator that is configured to upgrade EUS to EUS: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: ClusterCurator metadata: annotations: cluster.open-cluster-management.io/upgrade-clusterversion-backoff-limit: "10" name: your-name namespace: your-namespace spec: desiredCuration: upgrade upgrade: intermediateUpdate: 4.14.x desiredUpdate: 4.15.x monitorTimeout: 120 posthook: - extra_vars: {} name: Unpause machinepool type: Job prehook: - extra_vars: {} name: Pause machinepool type: Job 1.7.11. Configuring Ansible Automation Platform jobs to run on hosted clusters Red Hat Ansible Automation Platform is integrated with multicluster engine operator so that you can create prehook and posthook Ansible Automation Platform job instances that occur before or after you create or update hosted clusters. Required access: Cluster administrator Prerequisites Running an Ansible Automation Platform job to install a hosted cluster Running an Ansible Automation Platform job to update a hosted cluster Running an Ansible Automation Platform job to delete a hosted cluster 1.7.11.1. Prerequisites You must meet the following prerequisites to run Automation templates on your clusters: A supported version of OpenShift Container Platform Install the Ansible Automation Platform Resource Operator to connect Ansible Automation Platform jobs to the lifecycle of Git subscriptions. When you use the Automation template to start Ansible Automation Platform jobs, ensure that the Ansible Automation Platform job template is idempotent when it is run. You can find the Ansible Automation Platform Resource Operator in the OpenShift Container Platform OperatorHub. 1.7.11.2. Running an Ansible Automation Platform job to install a hosted cluster To start an Ansible Automation Platform job that installs a hosted cluster, complete the following steps: Create the HostedCluster and NodePool resources, including the pausedUntil: true field. If you use the hcp create cluster command-line interface command, you can specify the --pausedUntil: true flag.
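For reference, a paused hosted cluster creation from the command line might resemble the following sketch. The flags other than --pausedUntil are illustrative assumptions; check hcp create cluster aws --help for the exact options that your version supports:

hcp create cluster aws \
  --name my-cluster \
  --namespace clusters \
  --pausedUntil true
# Provider-specific flags, such as credentials, region, and base domain, are omitted from this sketch.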
See the following examples: apiVersion: hypershift.openshift.io/v1beta1 kind: HostedCluster metadata: name: my-cluster namespace: clusters spec: pausedUntil: 'true' apiVersion: hypershift.openshift.io/v1beta1 kind: NodePool metadata: name: my-cluster-us-east-2 namespace: clusters spec: pausedUntil: 'true' Create a ClusterCurator resource with the same name as the HostedCluster resource and in the same namespace as the HostedCluster resource. See the following example: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: ClusterCurator metadata: name: my-cluster namespace: clusters labels: open-cluster-management: curator spec: desiredCuration: install install: jobMonitorTimeout: 5 prehook: - name: Demo Job Template extra_vars: variable1: something-interesting variable2: 2 - name: Demo Job Template posthook: - name: Demo Job Template towerAuthSecret: toweraccess If your Ansible Automation Platform Tower requires authentication, create a secret resource. See the following example: apiVersion: v1 kind: Secret metadata: name: toweraccess namespace: clusters stringData: host: https://my-tower-domain.io token: ANSIBLE_TOKEN_FOR_admin 1.7.11.3. Running an Ansible Automation Platform job to update a hosted cluster To run an Ansible Automation Platform job that updates a hosted cluster, edit the ClusterCurator resource of the hosted cluster that you want to update. See the following example: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: ClusterCurator metadata: name: my-cluster namespace: clusters labels: open-cluster-management: curator spec: desiredCuration: upgrade upgrade: desiredUpdate: 4.15.1 1 monitorTimeout: 120 prehook: - name: Demo Job Template extra_vars: variable1: something-interesting variable2: 2 - name: Demo Job Template posthook: - name: Demo Job Template towerAuthSecret: toweraccess 1 For details about supported versions, see Hosted control planes . Note: When you update a hosted cluster in this way, you update both the hosted control plane and the node pools to the same version. Updating the hosted control planes and node pools to different versions is not supported. 1.7.11.4. Running an Ansible Automation Platform job to delete a hosted cluster To run an Ansible Automation Platform job that deletes a hosted cluster, edit the ClusterCurator resource of the hosted cluster that you want to delete. See the following example: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: ClusterCurator metadata: name: my-cluster namespace: clusters labels: open-cluster-management: curator spec: desiredCuration: destroy destroy: jobMonitorTimeout: 5 prehook: - name: Demo Job Template extra_vars: variable1: something-interesting variable2: 2 - name: Demo Job Template posthook: - name: Demo Job Template towerAuthSecret: toweraccess Note: Deleting a hosted cluster on AWS is not supported. 1.7.11.5. Additional resources For more information about the hosted control plane command line interface, hcp , see Installing the hosted control planes command-line interface . For more information about hosted clusters, including supported versions, see Introduction to hosted control planes . 1.7.12. ClusterClaims A ClusterClaim is a cluster-scoped custom resource definition (CRD) on a managed cluster. A ClusterClaim represents a piece of information that a managed cluster claims. You can use the ClusterClaim to determine the Placement of the resource on the target clusters. 
The following example shows a ClusterClaim that is identified in the YAML file: apiVersion: cluster.open-cluster-management.io/v1alpha1 kind: ClusterClaim metadata: name: id.openshift.io spec: value: 95f91f25-d7a2-4fc3-9237-2ef633d8451c The following table shows the defined ClusterClaim list for a cluster that multicluster engine operator manages: Claim name Reserved Mutable Description id.k8s.io true false ClusterID defined in upstream proposal kubeversion.open-cluster-management.io true true Kubernetes version platform.open-cluster-management.io true false Platform the managed cluster is running on, such as AWS, GCE, and Equinix Metal product.open-cluster-management.io true false Product name, such as OpenShift, Anthos, EKS and GKE id.openshift.io false false OpenShift Container Platform external ID, which is only available for an OpenShift Container Platform cluster consoleurl.openshift.io false true URL of the management console, which is only available for an OpenShift Container Platform cluster version.openshift.io false true OpenShift Container Platform version, which is only available for an OpenShift Container Platform cluster If any of the claims are deleted or updated on managed cluster, they are restored or rolled back to a version automatically. After the managed cluster joins the hub, any ClusterClaim that is created on a managed cluster is synchronized with the status of the ManagedCluster resource on the hub cluster. See the following example of clusterClaims for a ManagedCluster , replacing 4.x with a supported version of OpenShift Container Platform: apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: labels: cloud: Amazon clusterID: 95f91f25-d7a2-4fc3-9237-2ef633d8451c installer.name: multiclusterhub installer.namespace: open-cluster-management name: cluster1 vendor: OpenShift name: cluster1 spec: hubAcceptsClient: true leaseDurationSeconds: 60 status: allocatable: cpu: '15' memory: 65257Mi capacity: cpu: '18' memory: 72001Mi clusterClaims: - name: id.k8s.io value: cluster1 - name: kubeversion.open-cluster-management.io value: v1.18.3+6c42de8 - name: platform.open-cluster-management.io value: AWS - name: product.open-cluster-management.io value: OpenShift - name: id.openshift.io value: 95f91f25-d7a2-4fc3-9237-2ef633d8451c - name: consoleurl.openshift.io value: 'https://console-openshift-console.apps.xxxx.dev04.red-chesterfield.com' - name: version.openshift.io value: '4.x' conditions: - lastTransitionTime: '2020-10-26T07:08:49Z' message: Accepted by hub cluster admin reason: HubClusterAdminAccepted status: 'True' type: HubAcceptedManagedCluster - lastTransitionTime: '2020-10-26T07:09:18Z' message: Managed cluster joined reason: ManagedClusterJoined status: 'True' type: ManagedClusterJoined - lastTransitionTime: '2020-10-30T07:20:20Z' message: Managed cluster is available reason: ManagedClusterAvailable status: 'True' type: ManagedClusterConditionAvailable version: kubernetes: v1.18.3+6c42de8 1.7.12.1. Create custom ClusterClaims You can create a ClusterClaim resource with a custom name on a managed cluster, which makes it easier to identify. The custom ClusterClaim resource is synchronized with the status of the ManagedCluster resource on the hub cluster. The following content shows an example of a definition for a customized ClusterClaim resource: apiVersion: cluster.open-cluster-management.io/v1alpha1 kind: ClusterClaim metadata: name: <custom_claim_name> spec: value: <custom_claim_value> The length of spec.value field must be 1024 or less. 
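As a usage sketch, assume the custom claim definition is saved as custom-claim.yaml and the managed cluster is named cluster1. You can create the claim on the managed cluster and then confirm from the hub cluster that it is synchronized into the ManagedCluster status:

# Run on the managed cluster to create the claim.
kubectl apply -f custom-claim.yaml

# Run on the hub cluster to confirm that the claim appears in the ManagedCluster status.
oc get managedcluster cluster1 -o jsonpath='{.status.clusterClaims}'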
The create permission on resource clusterclaims.cluster.open-cluster-management.io is required to create a ClusterClaim resource. 1.7.12.2. List existing ClusterClaims You can use the kubectl command to list the ClusterClaims that apply to your managed cluster so that you can compare your ClusterClaim to an error message. Note: Make sure you have list permission on resource clusterclaims.cluster.open-cluster-management.io . Run the following command to list all existing ClusterClaims that are on the managed cluster: 1.7.13. ManagedClusterSets A ManagedClusterSet is a group of managed clusters. A managed cluster set, can help you manage access to all of your managed clusters. You can also create a ManagedClusterSetBinding resource to bind a ManagedClusterSet resource to a namespace. Each cluster must be a member of a managed cluster set. When you install the hub cluster, a ManagedClusterSet resource is created called default . All clusters that are not assigned to a managed cluster set are automatically assigned to the default managed cluster set. You cannot delete or update the default managed cluster set. Continue reading to learn more about how to create and manage managed cluster sets: Creating a ManagedClusterSet Assigning RBAC permissions to ManagedClusterSets Creating a ManagedClusterSetBinding resource Removing a cluster from a ManagedClusterSet 1.7.13.1. Creating a ManagedClusterSet You can group managed clusters together in a managed cluster set to limit the user access on managed clusters. Required access: Cluster administrator A ManagedClusterSet is a cluster-scoped resource, so you must have cluster administration permissions for the cluster where you are creating the ManagedClusterSet . A managed cluster cannot be included in more than one ManagedClusterSet . You can create a managed cluster set from either the multicluster engine operator console or from the CLI. Note: Cluster pools that are not added to a managed cluster set are not added to the default ManagedClusterSet resource. After a cluster is claimed from the cluster pool, the cluster is added to the default ManagedClusterSet . When you create a managed cluster, the following are automatically created to ease management: A ManagedClusterSet called global . The namespace called open-cluster-management-global-set . A ManagedClusterSetBinding called global to bind the global ManagedClusterSet to the open-cluster-management-global-set namespace. Important: You cannot delete, update, or edit the global managed cluster set. The global managed cluster set includes all managed clusters. See the following example: apiVersion: cluster.open-cluster-management.io/v1beta2 kind: ManagedClusterSetBinding metadata: name: global namespace: open-cluster-management-global-set spec: clusterSet: global 1.7.13.1.1. Creating a ManagedClusterSet by using the CLI Add the following definition of the managed cluster set to your YAML file to create a managed cluster set by using the CLI: apiVersion: cluster.open-cluster-management.io/v1beta2 kind: ManagedClusterSet metadata: name: <cluster_set> Replace <cluster_set> with the name of your managed cluster set. 1.7.13.1.2. Adding a cluster to a ManagedClusterSet After you create your ManagedClusterSet , you can add clusters to your managed cluster set by either following the instructions in the console or by using the CLI. 1.7.13.1.3. 
Adding clusters to a ManagedClusterSet by using the CLI Complete the following steps to add a cluster to a managed cluster set by using the CLI: Ensure that there is an RBAC ClusterRole entry that allows you to create on a virtual subresource of managedclustersets/join . Note: Without this permission, you cannot assign a managed cluster to a ManagedClusterSet . If this entry does not exist, add it to your YAML file. See the following example: kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: clusterrole1 rules: - apiGroups: ["cluster.open-cluster-management.io"] resources: ["managedclustersets/join"] resourceNames: ["<cluster_set>"] verbs: ["create"] Replace <cluster_set> with the name of your ManagedClusterSet . Note: If you are moving a managed cluster from one ManagedClusterSet to another, you must have that permission available on both managed cluster sets. Find the definition of the managed cluster in the YAML file. See the following example definition: apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: <cluster_name> spec: hubAcceptsClient: true Add the cluster.open-cluster-management.io/clusterset paremeter and specify the name of the ManagedClusterSet . See the following example: apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: <cluster_name> labels: cluster.open-cluster-management.io/clusterset: <cluster_set> spec: hubAcceptsClient: true 1.7.13.2. Assigning RBAC permissions to a ManagedClusterSet You can assign users or groups to your cluster set that are provided by the configured identity providers on the hub cluster. Required access: Cluster administrator See the following table for the three ManagedClusterSet API RBAC permission levels: Cluster set Access permissions Create permissions admin Full access permission to all of the cluster and cluster pool resources that are assigned to the managed cluster set. Permission to create clusters, import clusters, and create cluster pools. The permissions must be assigned to the managed cluster set when it is created. bind Permission to bind the cluster set to a namespace by creating a ManagedClusterSetBinding . The user or group must also have permission to create the ManagedClusterSetBinding in the target namespace. Read only permissions to all of the cluster and cluster pool resources that are assigned to the managed cluster set. No permission to create clusters, import clusters, or create cluster pools. view Read only permission to all of the cluster and cluster pool resources that are assigned to the managed cluster set. No permission to create clusters, import clusters, or create cluster pools. Note: You cannot apply the Cluster set admin permission for the global cluster set. Complete the following steps to assign users or groups to your managed cluster set from the console: From the OpenShift Container Platform console, navigate to Infrastructure > Clusters . Select the Cluster sets tab. Select your target cluster set. Select the Access management tab. Select Add user or group . Search for, and select the user or group that you want to provide access. Select the Cluster set admin or Cluster set view role to give to the selected user or user group. See Overview of roles in multicluster engine operator Role-based access control for more information. Select Add to submit the changes. Your user or group is displayed in the table. 
It might take a few seconds for the permission assignments for all of the managed cluster set resources to be propagated to your user or group. See Filtering ManagedClusters from ManagedCusterSets for placement information. 1.7.13.3. Creating a ManagedClusterSetBinding resource A ManagedClusterSetBinding resource binds a ManagedClusterSet resource to a namespace. Applications and policies that are created in the same namespace can only access clusters that are included in the bound managed cluster set resource. Access permissions to the namespace automatically apply to a managed cluster set that is bound to that namespace. If you have access permissions to that namespace, you automatically have permissions to access any managed cluster set that is bound to that namespace. If you only have permissions to access the managed cluster set, you do not automatically have permissions to access other managed cluster sets on the namespace. You can create a managed cluster set binding by using the console or the command line. 1.7.13.3.1. Creating a ManagedClusterSetBinding by using the console Complete the following steps to create a ManagedClusterSetBinding by using the console: From the OpenShift Container Platform console, navigate to Infrastructure > Clusters and select the Cluster sets tab. Select the name of the cluster set that you want to create a binding for. Navigate to Actions > Edit namespace bindings . On the Edit namespace bindings page, select the namespace to which you want to bind the cluster set from the drop-down menu. 1.7.13.3.2. Creating a ManagedClusterSetBinding by using the CLI Complete the following steps to create a ManagedClusterSetBinding by using the CLI: Create the ManagedClusterSetBinding resource in your YAML file. Note: When you create a managed cluster set binding, the name of the managed cluster set binding must match the name of the managed cluster set to bind. Your ManagedClusterSetBinding resource might resemble the following information: apiVersion: cluster.open-cluster-management.io/v1beta2 kind: ManagedClusterSetBinding metadata: namespace: <namespace> name: <cluster_set> spec: clusterSet: <cluster_set> Ensure that you have the bind permission on the target managed cluster set. View the following example of a ClusterRole resource, which contains rules that allow the user to bind to <cluster_set> : apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <clusterrole> rules: - apiGroups: ["cluster.open-cluster-management.io"] resources: ["managedclustersets/bind"] resourceNames: ["<cluster_set>"] verbs: ["create"] 1.7.13.4. Placing managed clusters by using taints and tolerations You can control the placement of your managed clusters or managed cluster sets by using taints and tolerations. Taints and tolerations provide a way to prevent managed clusters from being selected for certain placements. This control can be helpful if you want to prevent certain managed clusters from being included in some placements. You can add a taint to the managed cluster, and add a toleration to the placement. If the taint and the toleration do not match, then the managed cluster is not selected for that placement. 1.7.13.4.1. Adding a taint to a managed cluster Taints are specified in the properties of a managed cluster and allow a placement to repel a managed cluster or a set of managed clusters. 
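Before you add a taint, you can check whether the managed cluster already has a taints section. A minimal sketch, run against the hub cluster with a hypothetical cluster name:

oc get managedcluster <managed_cluster_name> -o jsonpath='{.spec.taints}'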
If the taints section does not exist, you can add a taint to a managed cluster by running a command that resembles the following example: oc patch managedcluster <managed_cluster_name> -p '{"spec":{"taints":[{"key": "key", "value": "value", "effect": "NoSelect"}]}}' --type=merge Alternatively, you can append a taint to existing taints by running a command similar to the following example: oc patch managedcluster <managed_cluster_name> --type='json' -p='[{"op": "add", "path": "/spec/taints/-", "value": {"key": "key", "value": "value", "effect": "NoSelect"}}]' The specification of a taint includes the following fields: Required Key - The taint key that is applied to a cluster. This value must match the value in the toleration for the managed cluster to meet the criteria for being added to that placement. You can determine this value. For example, this value could be bar or foo.example.com/bar . Optional Value - The taint value for the taint key. This value must match the value in the toleration for the managed cluster to meet the criteria for being added to that placement. For example, this value could be value . Required Effect - The effect of the taint on placements that do not tolerate the taint, or what occurs when the taint and the toleration of the placement do not match. The value of the effects must be one of the following values: NoSelect - Placements are not allowed to select a cluster unless they tolerate this taint. If the cluster was selected by the placement before the taint was set, the cluster is removed from the placement decision. NoSelectIfNew - The scheduler cannot select the cluster if it is a new cluster. Placements can only select the cluster if they tolerate the taint and already have the cluster in their cluster decisions. Required TimeAdded - The time when the taint was added. This value is automatically set. 1.7.13.4.2. Identifying built-in taints to reflect the status of managed clusters When a managed cluster is not accessible, you do not want the cluster added to a placement. The following taints are automatically added to managed clusters that are not accessible: cluster.open-cluster-management.io/unavailable - This taint is added to a managed cluster when the cluster has a condition of ManagedClusterConditionAvailable with status of False . The taint has the effect of NoSelect and an empty value to prevent an unavailable cluster from being scheduled. An example of this taint is provided in the following content: apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: cluster1 spec: hubAcceptsClient: true taints: - effect: NoSelect key: cluster.open-cluster-management.io/unavailable timeAdded: '2022-02-21T08:11:54Z' cluster.open-cluster-management.io/unreachable - This taint is added to a managed cluster when the status of the condition for ManagedClusterConditionAvailable is either Unknown or has no condition. The taint has effect of NoSelect and an empty value to prevent an unreachable cluster from being scheduled. An example of this taint is provided in the following content: apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: cluster1 spec: hubAcceptsClient: true taints: - effect: NoSelect key: cluster.open-cluster-management.io/unreachable timeAdded: '2022-02-21T08:11:06Z' 1.7.13.4.3. Adding a toleration to a placement Tolerations are applied to placements, and allow the placements to repel managed clusters that do not have taints that match the tolerations of the placement. 
The specification of a toleration includes the following fields: Optional Key - The key matches the taint key to allow the placement. Optional Value - The value in the toleration must match the value of the taint for the toleration to allow the placement. Optional Operator - The operator represents the relationship between a key and a value. Valid operators are equal and exists . The default value is equal . A toleration matches a taint when the keys are the same, the effects are the same, and the operator is one of the following values: equal - The operator is equal and the values are the same in the taint and the toleration. exists - The wildcard for value, so a placement can tolerate all taints of a particular category. Optional Effect - The taint effect to match. When left empty, it matches all taint effects. The allowed values when specified are NoSelect or NoSelectIfNew . Optional TolerationSeconds - The length of time, in seconds, that the toleration tolerates the taint before moving the managed cluster to a new placement. If the effect value is not NoSelect or PreferNoSelect , this field is ignored. The default value is nil , which indicates that there is no time limit. The starting time of the counting of the TolerationSeconds is automatically listed as the TimeAdded value in the taint, rather than in the value of the cluster scheduled time or the TolerationSeconds added time. The following example shows how to configure a toleration that tolerates clusters that have taints: Taint on the managed cluster for this example: apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: cluster1 spec: hubAcceptsClient: true taints: - effect: NoSelect key: gpu value: "true" timeAdded: '2022-02-21T08:11:06Z' Toleration on the placement that allows the taint to be tolerated apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement1 namespace: default spec: tolerations: - key: gpu value: "true" operator: Equal With the example tolerations defined, cluster1 could be selected by the placement because the key: gpu and value: "true" match. Note: A managed cluster is not guaranteed to be placed on a placement that contains a toleration for the taint. If other placements contain the same toleration, the managed cluster might be placed on one of those placements. 1.7.13.4.4. Specifying a temporary toleration The value of TolerationSeconds specifies the period of time that the toleration tolerates the taint. This temporary toleration can be helpful when a managed cluster is offline and you can transfer applications that are deployed on this cluster to another managed cluster for a tolerated time. For example, the managed cluster with the following taint becomes unreachable: apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: cluster1 spec: hubAcceptsClient: true taints: - effect: NoSelect key: cluster.open-cluster-management.io/unreachable timeAdded: '2022-02-21T08:11:06Z' If you define a placement with a value for TolerationSeconds , as in the following example, the workload transfers to another available managed cluster after 5 minutes. apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: demo4 namespace: demo1 spec: tolerations: - key: cluster.open-cluster-management.io/unreachable operator: Exists tolerationSeconds: 300 The application is moved to another managed cluster after the managed cluster is unreachable for 5 minutes. 1.7.13.4.5. 
Additional resources To learn more about taints and tolerations, see Using taints and tolerations to control logging pod placement in the OpenShift Container Platform documentation. To learn how to use oc patch, see oc patch in the OpenShift Container Platform documentation. 1.7.13.5. Removing a managed cluster from a ManagedClusterSet You might want to remove a managed cluster from a managed cluster set to move it to a different managed cluster set, or to remove it from the management settings of the set. You can remove a managed cluster from a managed cluster set by using the console or the CLI. Notes: Every managed cluster must be assigned to a managed cluster set. If you remove a managed cluster from a ManagedClusterSet and do not assign it to a different ManagedClusterSet, the cluster is automatically added to the default managed cluster set. If the Submariner add-on is installed on your managed cluster, you must uninstall the add-on before removing your managed cluster from a ManagedClusterSet. 1.7.13.5.1. Removing a cluster from a ManagedClusterSet by using the console Complete the following steps to remove a cluster from a managed cluster set by using the console: Click Infrastructure > Clusters and ensure that the Cluster sets tab is selected. Select the name of the cluster set that contains the cluster that you want to remove to view the cluster set details. Select Actions > Manage resource assignments. On the Manage resource assignments page, clear the checkbox for the resources that you want to remove from the cluster set. This step removes a resource that is already a member of the cluster set. You can see if the resource is already a member of a cluster set by viewing the details of the managed cluster. Note: If you are moving a managed cluster from one managed cluster set to another, you must have the required RBAC permissions on both managed cluster sets. 1.7.13.5.2. Removing a cluster from a ManagedClusterSet by using the CLI To remove a cluster from a managed cluster set by using the command line, complete the following steps: Run the following command to display a list of managed clusters in the managed cluster set: oc get managedclusters -l cluster.open-cluster-management.io/clusterset=<cluster_set> Replace cluster_set with the name of the managed cluster set. Locate the entry for the cluster that you want to remove. Remove the label from the YAML entry for the cluster that you want to remove. See the following code for an example of the label: labels: cluster.open-cluster-management.io/clusterset: clusterset1 Note: If you are moving a managed cluster from one cluster set to another, you must have the required RBAC permission on both managed cluster sets. 1.7.14. Placement A placement resource is a namespace-scoped resource that defines a rule to select a set of ManagedClusters from the ManagedClusterSets, which are bound to the placement namespace. Required access: Cluster administrator, Cluster set administrator Continue reading to learn more about how to use placements: Placement overview Selecting ManagedClusters from ManagedClusterSets Checking selected ManagedClusters by using PlacementDecisions 1.7.14.1. Placement overview See the following information about how placement with managed clusters works: Kubernetes clusters are registered with the hub cluster as cluster-scoped ManagedClusters. The ManagedClusters are organized into cluster-scoped ManagedClusterSets. The ManagedClusterSets are bound to workload namespaces.
The namespace-scoped placements specify a portion of ManagedClusterSets that select a working set of the potential ManagedClusters . Placements filter ManagedClusters from ManagedClusterSets by using labelSelector and claimSelector . The placement of ManagedClusters can be controlled by using taints and tolerations. Placements sort the clusters by Prioritizers scores and select the top n clusters from that group. You can define n in numberOfClusters . Placements do not select managed clusters that you are deleting. Notes: You must bind at least one ManagedClusterSet to a namespace by creating a ManagedClusterSetBinding in that namespace. You must have role-based access to CREATE on the virtual sub-resource of managedclustersets/bind . 1.7.14.1.1. Additional resources See Using taints and tolerations to place managed clusters for more information. To learn more about the API and Prioritizers , see Placements API . Return to Selecting ManagedClusters with placement . 1.7.14.2. Selecting ManagedClusters from ManagedClusterSets You can select which ManagedClusters to filter by using labelSelector or claimSelector . See the following examples to learn how to use both filters: In the following example, the labelSelector only matches clusters with the label vendor: OpenShift : apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement namespace: ns1 spec: predicates: - requiredClusterSelector: labelSelector: matchLabels: vendor: OpenShift In the following example, claimSelector only matches clusters with region.open-cluster-management.io with us-west-1 : apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement namespace: ns1 spec: predicates: - requiredClusterSelector: claimSelector: matchExpressions: - key: region.open-cluster-management.io operator: In values: - us-west-1 You can also filter ManagedClusters from particular cluster sets by using the clusterSets parameter. In the following example, claimSelector only matches the cluster sets clusterset1 and clusterset2 : apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement namespace: ns1 spec: clusterSets: - clusterset1 - clusterset2 predicates: - requiredClusterSelector: claimSelector: matchExpressions: - key: region.open-cluster-management.io operator: In values: - us-west-1 You can also choose how many ManagedClusters you want to filter by using the numberOfClusters paremeter. See the following example: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement namespace: ns1 spec: numberOfClusters: 3 1 predicates: - requiredClusterSelector: labelSelector: matchLabels: vendor: OpenShift claimSelector: matchExpressions: - key: region.open-cluster-management.io operator: In values: - us-west-1 1 Specify how many ManagedClusters you want to select. The example is set to 3 . 1.7.14.2.1. Filtering ManagedClusters by defining tolerations with placement To learn how to filter ManagedClusters with matching taints, see the following examples: By default, the placement cannot select cluster1 in the following example: apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: cluster1 spec: hubAcceptsClient: true taints: - effect: NoSelect key: gpu value: "true" timeAdded: '2022-02-21T08:11:06Z' To select cluster1 you must define tolerations. 
See the following example: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement namespace: ns1 spec: tolerations: - key: gpu value: "true" operator: Equal You can also select ManagedClusters with matching taints for a specified amount of time by using the tolerationSeconds parameter. tolerationSeconds defines how long a toleration stays bound to a taint. tolerationSeconds can automatically transfer applications that are deployed on a cluster that goes offline to another managed cluster after a specified length of time. Learn how to use tolerationSeconds by viewing the following examples: In the following example, the managed cluster becomes unreachable: apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: cluster1 spec: hubAcceptsClient: true taints: - effect: NoSelect key: cluster.open-cluster-management.io/unreachable timeAdded: '2022-02-21T08:11:06Z' If you define a placement with tolerationSeconds , the workload is transferred to another available managed cluster. See the following example: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement namespace: ns1 spec: tolerations: - key: cluster.open-cluster-management.io/unreachable operator: Exists tolerationSeconds: 300 1 1 Specify after how many seconds you want the workload to be transferred. 1.7.14.2.2. Filtering ManagedClusters based on add-on status You might want to select managed clusters for your placements based on the status of the add-ons that are deployed on them. For example, you can select a managed cluster for your placement only if there is a specific add-on that is enabled on the managed cluster. You can specify the label for the add-on, as well as its status, when you create the placement. A label is automatically created on a ManagedCluster resource if an add-on is enabled on the managed cluster. The label is automatically removed if the add-on is disabled. Each add-on is represented by a label in the format of feature.open-cluster-management.io/addon-<addon_name>=<status_of_addon> . Replace addon_name with the name of the add-on that you want to enable on the selected managed cluster. Replace status_of_addon with the status that you want the add-on to have if the managed cluster is selected. See the following table of possible value for status_of_addon : Value Description available The add-on is enabled and available. unhealthy The add-on is enabled, but the lease is not updated continuously. unreachable The add-on is enabled, but there is no lease found for it. This can also be caused when the managed cluster is offline. 
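If you want to see which managed clusters currently carry a particular add-on status label before you write a placement, you can query the hub cluster directly. A minimal sketch that assumes the application-manager add-on with the available status:

oc get managedclusters -l feature.open-cluster-management.io/addon-application-manager=available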
For example, an available application-manager add-on is represented by a label on the managed cluster that reads the following: See the following examples to learn how to create placements based on add-ons and their status: The following placement example includes all managed clusters that have application-manager enabled on them: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement1 namespace: ns1 spec: predicates: - requiredClusterSelector: labelSelector: matchExpressions: - key: feature.open-cluster-management.io/addon-application-manager operator: Exists The following placement example includes all managed clusters that have application-manager enabled with an available status: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement2 namespace: ns1 spec: predicates: - requiredClusterSelector: labelSelector: matchLabels: "feature.open-cluster-management.io/addon-application-manager": "available" The following placement example includes all managed clusters that have application-manager disabled: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement3 namespace: ns1 spec: predicates: - requiredClusterSelector: labelSelector: matchExpressions: - key: feature.open-cluster-management.io/addon-application-manager operator: DoesNotExist 1.7.14.2.3. Prioritizing ManagedClusters by defining prioritizerPolicy with placement View the following examples to learn how to prioritize ManagedClusters by using the prioritizerPolicy parameter with placement. The following example selects a cluster with the largest allocatable memory: Note: Similar to Kubernetes Node Allocatable , 'allocatable' is defined as the amount of compute resources that are available for pods on each cluster. apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement namespace: ns1 spec: numberOfClusters: 1 prioritizerPolicy: configurations: - scoreCoordinate: builtIn: ResourceAllocatableMemory The following example selects a cluster with the largest allocatable CPU and memory, and makes placement sensitive to resource changes: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement namespace: ns1 spec: numberOfClusters: 1 prioritizerPolicy: configurations: - scoreCoordinate: builtIn: ResourceAllocatableCPU weight: 2 - scoreCoordinate: builtIn: ResourceAllocatableMemory weight: 2 The following example selects two clusters with the largest addOn score CPU ratio, and pins the placement decisions: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement namespace: ns1 spec: numberOfClusters: 2 prioritizerPolicy: mode: Exact configurations: - scoreCoordinate: builtIn: Steady weight: 3 - scoreCoordinate: type: AddOn addOn: resourceName: default scoreName: cpuratio 1.7.14.2.4. Additional resources See Node Allocatable for more details. Return to Selecting ManagedClusters with placement for other topics. 1.7.14.3. Checking selected ManagedClusters by using PlacementDecisions One or more PlacementDecision kinds with the label cluster.open-cluster-management.io/placement={placement_name} are created to represent ManagedClusters selected by a placement. If you select a ManagedCluster and add it to a PlacementDecision , the components that consume this placement might apply the workload on this ManagedCluster . 
When you do not select ManagedCluster and you remove it from the PlacementDecision , the workload that is applied on this ManagedCluster is removed. You can prevent the workload removal by defining tolerations. To learn more about defining tolerations, see Filtering ManagedClusters by defining tolerations with placement . See the following PlacementDecision example: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: PlacementDecision metadata: labels: cluster.open-cluster-management.io/placement: placement1 name: placement1-kbc7q namespace: ns1 ownerReferences: - apiVersion: cluster.open-cluster-management.io/v1beta1 blockOwnerDeletion: true controller: true kind: Placement name: placement1 uid: 05441cf6-2543-4ecc-8389-1079b42fe63e status: decisions: - clusterName: cluster1 reason: '' - clusterName: cluster2 reason: '' - clusterName: cluster3 reason: '' 1.7.14.3.1. Additional resources To learn more about the API, see PlacementDecisions API . 1.7.15. Managing cluster pools (Technology Preview) Cluster pools provide rapid and cost-effective access to configured Red Hat OpenShift Container Platform clusters on-demand and at scale. Cluster pools provision a configurable and scalable number of OpenShift Container Platform clusters on Amazon Web Services, Google Cloud Platform, or Microsoft Azure that can be claimed when they are needed. They are especially useful when providing or replacing cluster environments for development, continuous integration, and production scenarios. You can specify a number of clusters to keep running so that they are available to be claimed immediately, while the remainder of the clusters will be kept in a hibernating state so that they can be resumed and claimed within a few minutes. ClusterClaim resources are used to check out clusters from cluster pools. When a cluster claim is created, the pool assigns a running cluster to it. If no running clusters are available, a hibernating cluster is resumed to provide the cluster or a new cluster is provisioned. The cluster pool automatically creates new clusters and resumes hibernating clusters to maintain the specified size and number of available running clusters in the pool. Creating a cluster pool Claiming clusters from cluster pools Updating the cluster pool release image Scaling cluster pools Destroying a cluster pool The procedure for creating a cluster pool is similar to the procedure for creating a cluster. Clusters in a cluster pool are not created for immediate use. 1.7.15.1. Creating a cluster pool The procedure for creating a cluster pool is similar to the procedure for creating a cluster. Clusters in a cluster pool are not created for immediate use. Required access : Administrator 1.7.15.1.1. Prerequisites See the following prerequisites before creating a cluster pool: You need to deploy a multicluster engine operator hub cluster. You need Internet access for your multicluster engine operator hub cluster so that it can create the Kubernetes cluster on the provider environment. You need an AWS, GCP, or Microsoft Azure provider credential. See Managing credentials overview for more information. You need a configured domain in your provider environment. See your provider documentation for instructions about how to configure a domain. You need provider login credentials. You need your OpenShift Container Platform image pull secret. See Using image pull secrets . 
Note: Adding a cluster pool with this procedure configures it so it automatically imports the cluster for multicluster engine operator management when you claim a cluster from the pool. If you want to create a cluster pool that does not automatically import the claimed cluster for management with the cluster claim, add the following annotation to your clusterClaim resource: kind: ClusterClaim metadata: annotations: cluster.open-cluster-management.io/createmanagedcluster: "false" 1 1 The word "false" must be surrounded by quotation marks to indicate that it is a string. 1.7.15.1.2. Create the cluster pool To create a cluster pool, select Infrastructure > Clusters in the navigation menu. The Cluster pools tab lists the cluster pools that you can access. Select Create cluster pool and complete the steps in the console. If you do not have a infrastructure credential that you want to use for the cluster pool, you can create one by selecting Add credential . You can either select an existing namespace from the list, or type the name of a new one to create one. The cluster pool does not have to be in the same namespace as the clusters. You can select a cluster set name if you want the RBAC roles for your cluster pool to share the role assignments of an existing cluster set. The cluster set for the clusters in the cluster pool can only be set when you create the cluster pool. You cannot change the cluster set association for the cluster pool or for the clusters in the cluster pool after you create the cluster pool. Any cluster that you claim from the cluster pool is automatically added to the same cluster set as the cluster pool. Note: If you do not have cluster admin permissions, you must select a cluster set. The request to create a cluster set is rejected with a forbidden error if you do not include the cluster set name in this situation. If no cluster sets are available for you to select, contact your cluster administrator to create a cluster set and give you clusterset admin permissions to it. The cluster pool size specifies the number of clusters that you want provisioned in your cluster pool, while the cluster pool running count specifies the number of clusters that the pool keeps running and ready to claim for immediate use. The procedure is very similar to the procedure for creating clusters. For specific information about the information that is required for your provider, see the following information: Creating a cluster on Amazon Web Services Creating a cluster on Google Cloud Platform Creating a cluster on Microsoft Azure 1.7.15.2. Claiming clusters from cluster pools ClusterClaim resources are used to check out clusters from cluster pools. A claim is completed when a cluster is running and ready in the cluster pool. The cluster pool automatically creates new running and hibernated clusters in the cluster pool to maintain the requirements that are specified for the cluster pool. Note: When a cluster that was claimed from the cluster pool is no longer needed and is destroyed, the resources are deleted. The cluster does not return to the cluster pool. Required access : Administrator 1.7.15.2.1. Prerequisites You must have a cluster pool with or without available clusters. If there are available clusters in the cluster pool, the available clusters are claimed. If there are no available clusters in the cluster pool, a cluster is created to fulfill the claim. See Creating a cluster pool for information about how to create a cluster pool. 1.7.15.2.2. 
Claim the cluster from the cluster pool When you create a cluster claim, you request a new cluster from the cluster pool. A cluster is checked out from the pool when a cluster is available. The claimed cluster is automatically imported as one of your managed clusters, unless you disabled automatic import. Complete the following steps to claim a cluster: From the navigation menu, click Infrastructure > Clusters , and select the Cluster pools tab. Find the name of the cluster pool you want to claim a cluster from and select Claim cluster . If a cluster is available, it is claimed and immediately appears in the Managed clusters tab. If there are no available clusters, it might take several minutes to resume a hibernated cluster or provision a new cluster. During this time, the claim status is pending . Expand the cluster pool to view or delete pending claims against it. The claimed cluster remains a member of the cluster set that it was associated with when it was in the cluster pool. You cannot change the cluster set of the claimed cluster when you claim it. Note: Changes to the pull secret, SSH keys, or base domain of the cloud provider credentials are not reflected for existing clusters that are claimed from a cluster pool, as they have already been provisioned using the original credentials. You cannot edit cluster pool information by using the console, but you can update it by updating its information using the CLI interface. You can also create a new cluster pool with a credential that contains the updated information. The clusters that are created in the new pool use the settings provided in the new credential. 1.7.15.3. Updating the cluster pool release image When the clusters in your cluster pool remain in hibernation for some time, the Red Hat OpenShift Container Platform release image of the clusters might become backlevel. If this happens, you can upgrade the version of the release image of the clusters that are in your cluster pool. Required access : Edit Complete the following steps to update the OpenShift Container Platform release image for the clusters in your cluster pool: Note: This procedure does not update clusters from the cluster pool that are already claimed in the cluster pool. After you complete this procedure, the updates to the release images only apply to the following clusters that are related to the cluster pool: Clusters that are created by the cluster pool after updating the release image with this procedure. Clusters that are hibernating in the cluster pool. The existing hibernating clusters with the old release image are destroyed, and new clusters with the new release image replace them. From the navigation menu, click Infrastructure > Clusters . Select the Cluster pools tab. Find the name of the cluster pool that you want to update in the Cluster pools table. Click the Options menu for the Cluster pools in the table, and select Update release image . Select a new release image to use for future cluster creations from this cluster pool. The cluster pool release image is updated. Tip: You can update the release image for multiple cluster pools with one action by selecting the box for each of the cluster pools and using the Actions menu to update the release image for the selected cluster pools. 1.7.15.4. Scaling cluster pools (Technology Preview) You can change the number of clusters in the cluster pool by increasing or decreasing the number of clusters in the cluster pool size. 
Required access : Cluster administrator Complete the following steps to change the number of clusters in your cluster pool: From the navigation menu, click Infrastructure > Clusters . Select the Cluster pools tab. In the Options menu for the cluster pool that you want to change, select Scale cluster pool . Change the value of the pool size. Optionally, you can update the number of running clusters to increase or decrease the number of clusters that are immediately available when you claim them. Your cluster pools are scaled to reflect your new values. 1.7.15.5. Destroying a cluster pool If you created a cluster pool and determine that you no longer need it, you can destroy the cluster pool. Important: You can only destroy cluster pools that do not have any cluster claims. Required access : Cluster administrator To destroy a cluster pool, complete the following steps: From the navigation menu, click Infrastructure > Clusters . Select the Cluster pools tab. In the Options menu for the cluster pool that you want to delete, type confirm in the confirmation box and select Destroy . Notes: The Destroy button is disabled if the cluster pool has any cluster claims. The namespace that contains the cluster pool is not deleted. Deleting the namespace destroys any clusters that have been claimed from the cluster pool, since the cluster claim resources for these clusters are created in the same namespace. Tip: You can destroy multiple cluster pools with one action by selecting the box for each of the cluster pools and using the Actions menu to destroy the selected cluster pools. 1.7.16. Enabling ManagedServiceAccount add-ons When you install a supported version of multicluster engine operator, the ManagedServiceAccount add-on is enabled by default. Important: If you upgraded your hub cluster from multicluster engine operator version 2.4 and did not enable the ManagedServiceAccount add-on before upgrading, you must enable the add-on manually. The ManagedServiceAccount allows you to create or delete a service account on a managed cluster. Required access: Editor When a ManagedServiceAccount custom resource is created in the <managed_cluster> namespace on the hub cluster, a ServiceAccount is created on the managed cluster. A TokenRequest is made with the ServiceAccount on the managed cluster to the Kubernetes API server on the managed cluster. The token is then stored in a Secret in the <target_managed_cluster> namespace on the hub cluster. Note: The token can expire and be rotated. See TokenRequest for more information about token requests. 1.7.16.1. Prerequisites You need a supported Red Hat OpenShift Container Platform environment. You need the multicluster engine operator installed. 1.7.16.2. Enabling ManagedServiceAccount To enable a ManagedServiceAccount add-on for a hub cluster and a managed cluster, complete the following steps: Enable the ManagedServiceAccount add-on on hub cluster. See Advanced configuration to learn more. Deploy the ManagedServiceAccount add-on and apply it to your target managed cluster. 
Create the following YAML file and replace target_managed_cluster with the name of the managed cluster where you are applying the Managed-ServiceAccount add-on: apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: managed-serviceaccount namespace: <target_managed_cluster> spec: installNamespace: open-cluster-management-agent-addon Run the following command to apply the file: You have now enabled the ManagedServiceAccount plug-in for your managed cluster. See the following steps to configure a ManagedServiceAccount . Create a ManagedServiceAccount custom resource with the following YAML source: apiVersion: authentication.open-cluster-management.io/v1alpha1 kind: ManagedServiceAccount metadata: name: <managedserviceaccount_name> namespace: <target_managed_cluster> spec: rotation: {} Replace managed_serviceaccount_name with the name of your ManagedServiceAccount . Replace target_managed_cluster with the name of the managed cluster to which you are applying the ManagedServiceAccount . To verify, view the tokenSecretRef attribute in the ManagedServiceAccount object status to find the secret name and namespace. Run the following command with your account and cluster name: oc get managedserviceaccount <managed_serviceaccount_name> -n <target_managed_cluster> -o yaml View the Secret containing the retrieved token that is connected to the created ServiceAccount on the managed cluster. Run the following command: oc get secret <managed_serviceaccount_name> -n <target_managed_cluster> -o yaml 1.7.17. Cluster lifecycle advanced configuration You can configure some cluster settings during or after installation. 1.7.17.1. Customizing API server certificates The managed clusters communicate with the hub cluster through a mutual connection with the OpenShift Kube API server external load balancer. The default OpenShift Kube API server certificate is issued by an internal Red Hat OpenShift Container Platform cluster certificate authority (CA) when OpenShift Container Platform is installed. If necessary, you can add or change certificates. Changing the API server certificate might impact the communication between the managed cluster and the hub cluster. When you add the named certificate before installing the product, you can avoid an issue that might leave your managed clusters in an offline state. The following list contains some examples of when you might need to update your certificates: You want to replace the default API server certificate for the external load balancer with your own certificate. By following the guidance in Adding API server certificates in the OpenShift Container Platform documentation, you can add a named certificate with host name api.<cluster_name>.<base_domain> to replace the default API server certificate for the external load balancer. Replacing the certificate might cause some of your managed clusters to move to an offline state. If your clusters are in an offline state after upgrading the certificates, follow the troubleshooting instructions for Troubleshooting imported clusters offline after certificate change to resolve it. Note: Adding the named certificate before installing the product helps to avoid your clusters moving to an offline state. The named certificate for the external load balancer is expiring and you need to replace it. 
If both the old and the new certificate share the same root CA certificate, despite the number of intermediate certificates, you can follow the guidance in Adding API server certificates in the OpenShift Container Platform documentation to create a new secret for the new certificate. Then update the serving certificate reference for host name api.<cluster_name>.<base_domain> to the new secret in the APIServer custom resource. Otherwise, when the old and new certificates have different root CA certificates, complete the following steps to replace the certificate: Locate your APIServer custom resource, which resembles the following example: apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: audit: profile: Default servingCerts: namedCertificates: - names: - api.mycluster.example.com servingCertificate: name: old-cert-secret Create a new secret in the openshift-config namespace that contains the content of the existing and new certificates by running the following commands: Copy the old certificate into a new certificate: cp old.crt combined.crt Add the contents of the new certificate to the copy of the old certificate: cat new.crt >> combined.crt Apply the combined certificates to create a secret: oc create secret tls combined-certs-secret --cert=combined.crt --key=old.key -n openshift-config Update your APIServer resource to reference the combined certificate as the servingCertificate . apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: audit: profile: Default servingCerts: namedCertificates: - names: - api.mycluster.example.com servingCertificate: name: combined-cert-secret After about 15 minutes, the CA bundle containing both new and old certificates is propagated to the managed clusters. Create another secret named new-cert-secret in the openshift-config namespace that contains only the new certificate information by entering the following command: oc create secret tls new-cert-secret --cert=new.crt --key=new.key -n openshift-config {code} Update the APIServer resource by changing the name of servingCertificate to reference the new-cert-secret . Your resource might resemble the following example: apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: audit: profile: Default servingCerts: namedCertificates: - names: - api.mycluster.example.com servingCertificate: name: new-cert-secret After about 15 minutes, the old certificate is removed from the CA bundle, and the change is automatically propagated to the managed clusters. Note: Managed clusters must use the host name api.<cluster_name>.<base_domain> to access the hub cluster. You cannot use named certificates that are configured with other host names. 1.7.17.2. Configuring the proxy between hub cluster and managed cluster To register a managed cluster to your multicluster engine for Kubernetes operator hub cluster, you need to transport the managed cluster to your multicluster engine operator hub cluster. Sometimes your managed cluster cannot directly reach your multicluster engine operator hub cluster. In this instance, configure the proxy settings to allow the communications from the managed cluster to access the multicluster engine operator hub cluster through a HTTP or HTTPS proxy server. For example, the multicluster engine operator hub cluster is in a public cloud, and the managed cluster is in a private cloud environment behind firewalls. The communications out of the private cloud can only go through a HTTP or HTTPS proxy server. 1.7.17.2.1. 
Prerequisites You have an HTTP or HTTPS proxy server running that supports HTTP tunnels, for example, the HTTP CONNECT method. You have a managed cluster that can reach the HTTP or HTTPS proxy server, and the proxy server can access the multicluster engine operator hub cluster. Complete the following steps to configure the proxy settings between the hub cluster and managed cluster: Create a KlusterletConfig resource with proxy settings. See the following configuration with HTTP proxy: apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: http-proxy spec: hubKubeAPIServerConfig: proxyURL: "http://<username>:<password>@<ip>:<port>" See the following configuration with HTTPS proxy: apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: https-proxy spec: hubKubeAPIServerConfig: proxyURL: "https://<username>:<password>@<ip>:<port>" trustedCABundles: - name: "proxy-ca-bundle" caBundle: name: <configmap-name> namespace: <configmap-namespace> Note: A CA bundle is required for an HTTPS proxy. It refers to a ConfigMap containing one or multiple CA certificates. You can create the ConfigMap by running the following command: oc create -n <configmap-namespace> configmap <configmap-name> --from-file=ca.crt=/path/to/ca/file When creating a managed cluster, choose the KlusterletConfig resource by adding an annotation that refers to the KlusterletConfig resource. See the following example: apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: annotations: agent.open-cluster-management.io/klusterlet-config: <klusterlet-config-name> name: <managed-cluster-name> spec: hubAcceptsClient: true leaseDurationSeconds: 60 Notes: You might need to toggle the YAML view to add the annotation to the ManagedCluster resource when you operate on the multicluster engine operator console. You can use a global KlusterletConfig to enable the configuration on every managed cluster without using an annotation for binding. 1.7.17.2.2. Disabling the proxy between hub cluster and managed cluster If your development changes, you might need to disable the HTTP or HTTPS proxy. Go to the ManagedCluster resource. Remove the agent.open-cluster-management.io/klusterlet-config annotation. 1.7.17.2.3. Optional: Configuring the klusterlet to run on specific nodes When you create a cluster using Red Hat Advanced Cluster Management for Kubernetes, you can specify the nodes that you want the managed cluster klusterlet to run on by configuring the nodeSelector and tolerations annotations for the managed cluster. Complete the following steps to configure these settings: Select the managed cluster that you want to update from the clusters page in the console. Set the YAML switch to On to view the YAML content. Note: The YAML editor is only available when importing or creating a cluster. To edit the managed cluster YAML definition after importing or creating, you must use the OpenShift Container Platform command-line interface or the Red Hat Advanced Cluster Management search feature. Add the nodeSelector annotation to the managed cluster YAML definition. The key for this annotation is: open-cluster-management/nodeSelector . The value of this annotation is a string map with JSON formatting. Add the tolerations entry to the managed cluster YAML definition. The key of this annotation is: open-cluster-management/tolerations . The value of this annotation represents a toleration list with JSON formatting.
The resulting YAML might resemble the following example: apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: annotations: open-cluster-management/nodeSelector: '{"dedicated":"acm"}' open-cluster-management/tolerations: '[{"key":"dedicated","operator":"Equal","value":"acm","effect":"NoSchedule"}]' To make sure your content is deployed to the correct nodes, complete the steps in Configuring nodeSelectors and tolerations for klusterlet add-ons . 1.7.17.3. Customizing the server URL and CA bundle of the hub cluster API server when importing a managed cluster (Technology Preview) You might not be able to register a managed cluster on your multicluster engine operator hub cluster if intermediate components exist between the managed cluster and the hub cluster. Example intermediate components include a Virtual IP, load balancer, reverse proxy, or API gateway. If you have an intermediate component, you must use a custom server URL and CA bundle for the hub cluster API server when importing a managed cluster. 1.7.17.3.1. Prerequisites You must configure the intermediate component so that the hub cluster API server is accessible for the managed cluster. If the intermediate component terminates the SSL connections between the managed cluster and hub cluster API server, you must bridge the SSL connections and pass the authentication information from the original requests to the back end of the hub cluster API server. You can use the User Impersonation feature of the Kubernetes API server to bridge the SSL connections. The intermediate component extracts the client certificate from the original requests, adds Common Name (CN) and Organization (O) of the certificate subject as impersonation headers, and then forwards the modified impersonation requests to the back end of the hub cluster API server. Note: If you bridge the SSL connections, the cluster proxy add-on does not work. 1.7.17.3.2. Customizing the server URL and hub cluster CA bundle To use a custom hub API server URL and CA bundle when importing a managed cluster, complete the following steps: Create a KlusterletConfig resource with the custom hub cluster API server URL and CA bundle. See the following example: apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: <name> 1 spec: hubKubeAPIServerConfig: url: "https://api.example.com:6443" 2 serverVerificationStrategy: UseCustomCABundles trustedCABundles: - name: <custom-ca-bundle> 3 caBundle: name: <custom-ca-bundle-configmap> 4 namespace: <multicluster-engine> 5 1 Add your klusterlet config name. 2 Add your custom server URL. 3 Add your custom CA bundle name. You can use any value except auto-detected , which is reserved for internal use. 4 Add the name of your CA bundle ConfigMap. You can create the ConfigMap by running the following command: oc create -n <configmap-namespace> configmap <configmap-name> --from-file=ca.crt=/path/to/ca/file 5 Add the namespace of your CA bundle ConfigMap. Select the KlusterletConfig resource when creating a managed cluster by adding an annotation that refers to the resource. See the following example: apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: annotations: agent.open-cluster-management.io/klusterlet-config: 1 name: 2 spec: hubAcceptsClient: true leaseDurationSeconds: 60 1 Add your klusterlet config name. 2 Add your cluster name. Notes: If you use the console, you might need to enable the YAML view to add the annotation to the ManagedCluster resource.
You can use a global KlusterletConfig to enable the configuration on every managed cluster without using an annotation for binding. 1.7.17.3.3. Configuring the global KlusterletConfig If you create a KlusterletConfig resource and set the name to global , the configurations in the global KlusterletConfig are automatically applied on every managed cluster. In an environment that has a global KlusterletConfig , you can also create a cluster-specific KlusterletConfig and bind it with a managed cluster by adding the agent.open-cluster-management.io/klusterlet-config: <klusterletconfig-name> annotation to the ManagedCluster resource . The value of the cluster-specific KlusterletConfig overrides the global KlusterletConfig value if you set different values for the same field. See the following example where the hubKubeAPIServerURL field has different values set in your KlusterletConfig and the global KlusterletConfig . The "https://api.example.test.com:6443" value overrides the "https://api.example.global.com:6443" value: Deprecation: The hubKubeAPIServerURL field is deprecated. apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: test spec: hubKubeAPIServerConfig: url: "https://api.example.test.com:6443" --- apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: global spec: hubKubeAPIServerConfig: url: "https://api.example.global.com:6443" The value of the global KlusterletConfig is used if there is no cluster-specific KlusterletConfig bound to a managed cluster, or the same field is missing or does not have a value in the cluster-specific KlusterletConfig . See the following example, where the "example.global.com" value in the hubKubeAPIServerURL field of the global KlusterletConfig overrides your KlusterletConfig : apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: test spec: hubKubeAPIServerURL: "" - apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: global spec: hubKubeAPIServerURL: "example.global.com" See the following example, where the "example.global.com" value in the hubKubeAPIServerURL field of the global KlusterletConfig also overrides your KlusterletConfig : apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: test - apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: global spec: hubKubeAPIServerURL: "example.global.com" 1.7.17.4. Configuring the hub cluster KubeAPIServer verification strategy Managed clusters communicate with the hub cluster through a mutual connection with the OpenShift Container Platform KubeAPIServer external load balancer. An internal OpenShift Container Platform cluster certificate authority (CA) issues the default OpenShift Container Platform KubeAPIServer certificate when you install OpenShift Container Platform. The multicluster engine for Kubernetes operator automatically detects and adds the certificate to managed clusters in the bootstrap-kubeconfig-secret namespace. If your automatically detected certificate does not work, you can manually configure a strategy configuration in the KlusterletConfig resource. Manually configuring the strategy allows you to control how you verify the hub cluster KubeAPIServer certificate. See the examples in one of the following three strategies to learn how to manually configure a strategy: 1.7.17.4.1. 
Configuring the strategy with UseAutoDetectedCABundle The default configuration strategy is UseAutoDetectedCABundle . The multicluster engine operator automatically detects the certificate on the hub cluster and merges the certificate configured in the trustedCABundles list of config map references to the real CA bundles, if there are any. The following example merges the automatically detected certificates from the hub cluster and the certificates that you configured in the new-ocp-ca config map, and adds both to the managed cluster: apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: ca-strategy spec: hubKubeAPIServerConfig: serverVerificationStrategy: UseAutoDetectedCABundle trustedCABundles: - name: new-ca caBundle: name: new-ocp-ca namespace: default 1.7.17.4.2. Configuring the strategy with UseSystemTruststore With UseSystemTruststore , multicluster engine operator does not detect any certificate and ignores the certificates configured in the trustedCABundles parameter section. This configuration does not pass any certificate to the managed clusters. Instead, the managed clusters use certificates from the system trusted store of the managed clusters to verify the hub cluster API server. This applies to situations where a public CA, such as Let's Encrypt , issues the hub cluster certificate. See the following example that uses UseSystemTruststore : apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: ca-strategy spec: hubKubeAPIServerConfig: serverVerificationStrategy: UseSystemTruststore 1.7.17.4.3. Configuring the strategy with UseCustomCABundles You can use UseCustomCABundles if you know the CA of the hub cluster API server and do not want multicluster engine operator to automatically detect it. For this strategy, multicluster engine operator adds your configured certificates from the trustedCABundles parameter to the managed clusters. See the following examples to learn how to use UseCustomCABundles : apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: ca-strategy spec: hubKubeAPIServerConfig: serverVerificationStrategy: UseCustomCABundles trustedCABundles: - name: ca caBundle: name: ocp-ca namespace: default Typically, this policy is the same for each managed cluster. The hub cluster administrator can configure a KlusterletConfig named global to activate the policy for each managed cluster when you install multicluster engine operator or when the hub cluster certificate changes. See the following example: apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: global spec: hubKubeAPIServerConfig: serverVerificationStrategy: UseSystemTruststore When a managed cluster needs to use a different strategy, you can also create a different KlusterletConfig and use the agent.open-cluster-management.io/klusterlet-config annotation in the managed clusters to point to a specific strategy. See the following example: apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: test-ca spec: hubKubeAPIServerConfig: serverVerificationStrategy: UseCustomCABundles trustedCABundles: - name: ca caBundle: name: ocp-ca namespace: default --- apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: annotations: agent.open-cluster-management.io/klusterlet-config: test-ca name: cluster1 spec: hubAcceptsClient: true leaseDurationSeconds: 60
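To confirm which KlusterletConfig resource a managed cluster is bound to, and therefore which verification strategy applies to it, you can list the KlusterletConfig resources and read the binding annotation on the ManagedCluster resource. The following is a minimal sketch that assumes the cluster1 managed cluster from the preceding example:

oc get klusterletconfig

oc get managedcluster cluster1 -o jsonpath='{.metadata.annotations}'

If the annotation is not set and a KlusterletConfig named global exists, the global configuration applies to the cluster.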
1.7.17.5. Additional resources Adding API server certificates Troubleshooting imported clusters offline after certificate change Configuring proxy settings for cluster proxy add-ons 1.7.18. Removing a cluster from management When you remove an OpenShift Container Platform cluster from management that was created with multicluster engine operator, you can either detach it or destroy it. Detaching a cluster removes it from management, but does not completely delete it. You can import it again if you want to manage it. This is only an option when the cluster is in a Ready state. The following procedures remove a cluster from management in either of the following situations: You already deleted the cluster and want to remove the deleted cluster from Red Hat Advanced Cluster Management. You want to remove the cluster from management, but have not deleted the cluster. Important: Destroying a cluster removes it from management and deletes the components of the cluster. When you detach or destroy a managed cluster, the related namespace is automatically deleted. Do not place custom resources in this namespace. Removing a cluster by using the console Removing a cluster by using the command line Removing remaining resources after removing a cluster Defragmenting the etcd database after removing a cluster 1.7.18.1. Removing a cluster by using the console From the navigation menu, navigate to Infrastructure > Clusters and select Destroy cluster or Detach cluster from the options menu beside the cluster that you want to remove from management. Tip: You can detach or destroy multiple clusters by selecting the check boxes of the clusters that you want to detach or destroy and selecting Detach or Destroy . Note: If you attempt to detach the hub cluster while it is managed, which is called a local-cluster , check to see if the default setting of disableHubSelfManagement is false . This setting causes the hub cluster to reimport itself and manage itself when it is detached, and it reconciles the MultiClusterHub controller. It might take hours for the hub cluster to complete the detachment process and reimport. To reimport the hub cluster without waiting for the processes to finish, you can enter the following command to restart the multiclusterhub-operator pod and reimport faster: You can change the value of the hub cluster to not import automatically by changing the disableHubSelfManagement value to true , as described in Installing while connected online . 1.7.18.2. Removing a cluster by using the command line To detach a managed cluster by using the command line of the hub cluster, run the following command: To destroy the managed cluster after detaching, run the following command: Notes: To prevent destroying the managed cluster, set the spec.preserveOnDelete parameter to true in the ClusterDeployment custom resource. The default setting of disableHubSelfManagement is false . The false setting causes the hub cluster, also called local-cluster , to reimport and manage itself when it is detached, and it reconciles the MultiClusterHub controller. The detachment and reimport process might take hours for the hub cluster to complete. If you want to reimport the hub cluster without waiting for the processes to finish, you can enter the following command to restart the multiclusterhub-operator pod and reimport faster: You can change the value of the hub cluster to not import automatically by changing the disableHubSelfManagement value to true . See Installing while connected online .
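A minimal sketch of detaching and then destroying a managed cluster from the command line, assuming a cluster named my-cluster that was provisioned through a ClusterDeployment in the my-cluster namespace (hypothetical names), might resemble the following commands:

oc delete managedcluster my-cluster

oc delete clusterdeployment my-cluster -n my-cluster

Deleting the ManagedCluster resource detaches the cluster from management, and deleting the ClusterDeployment resource destroys the cluster unless the spec.preserveOnDelete parameter is set to true , as described in the preceding notes.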
1.7.18.3. Removing remaining resources after removing a cluster If there are remaining resources on the managed cluster that you removed, there are additional steps that are required to ensure that you remove all of the remaining components. Situations when these extra steps are required include the following examples: The managed cluster was detached before it was completely created, and components like the klusterlet remain on the managed cluster. The hub that was managing the cluster was lost or destroyed before detaching the managed cluster, and there is no way to detach the managed cluster from the hub. The managed cluster was not in an online state when it was detached. If one of these situations applies to your attempted detachment of a managed cluster, there are some resources that cannot be removed from the managed cluster. Complete the following steps to detach the managed cluster: Make sure you have the oc command line interface configured. Make sure you have KUBECONFIG configured on your managed cluster. If you run oc get ns | grep open-cluster-management-agent , you should see two namespaces: Remove the klusterlet custom resource by using the following command: oc get klusterlet | grep klusterlet | awk '{print $1}' | xargs oc patch klusterlet --type=merge -p '{"metadata":{"finalizers": []}}' Run the following command to remove the remaining resources: oc delete namespaces open-cluster-management-agent open-cluster-management-agent-addon --wait=false oc get crds | grep open-cluster-management.io | awk '{print $1}' | xargs oc delete crds --wait=false oc get crds | grep open-cluster-management.io | awk '{print $1}' | xargs oc patch crds --type=merge -p '{"metadata":{"finalizers": []}}' Run the following command to ensure that both namespaces and all open-cluster-management CRDs are removed: oc get crds | grep open-cluster-management.io | awk '{print $1}' oc get ns | grep open-cluster-management-agent 1.7.18.4. Defragmenting the etcd database after removing a cluster Having many managed clusters can affect the size of the etcd database in the hub cluster. In OpenShift Container Platform 4.8, when you delete a managed cluster, the etcd database in the hub cluster is not automatically reduced in size. In some scenarios, the etcd database can run out of space. An etcdserver: mvcc: database space exceeded error is displayed. To correct this error, reduce the size of the etcd database by compacting the database history and defragmenting the etcd database. Note: For OpenShift Container Platform version 4.9 and later, the etcd Operator automatically defragments disks and compacts the etcd history. No manual intervention is needed. The following procedure is for OpenShift Container Platform version 4.8 and earlier. Compact the etcd history and defragment the etcd database in the hub cluster by completing the following procedure. 1.7.18.4.1. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 1.7.18.4.2. Procedure Compact the etcd history. Open a remote shell session to the etcd member, for example: $ oc rsh -n openshift-etcd etcd-control-plane-0.example.com etcdctl endpoint status --cluster -w table Run the following command to compact the etcd history: sh-4.4# etcdctl compact $(etcdctl endpoint status --write-out="json" | egrep -o '"revision":[0-9]*' | egrep -o '[0-9]*' -m1) Example output: compacted revision 158774421 Defragment the etcd database and clear any NOSPACE alarms as outlined in Defragmenting etcd data .
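A minimal sketch of the defragmentation step, run from the same remote shell session on an etcd member, might resemble the following commands; follow the linked Defragmenting etcd data procedure for the complete steps and the recommended ordering of members:

sh-4.4# etcdctl defrag

sh-4.4# etcdctl alarm list

sh-4.4# etcdctl alarm disarm

Clear the NOSPACE alarm only after compaction and defragmentation have reduced the database size below the quota.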
1.8. Discovery service introduction You can discover OpenShift 4 clusters that are available from OpenShift Cluster Manager . After discovery, you can import your clusters to manage them. The Discovery service uses the Discovery Operator for back-end and console usage. You must have an OpenShift Cluster Manager credential. See Creating a credential for Red Hat OpenShift Cluster Manager if you need to create a credential. Required access : Administrator Configure Discovery with the console Configure Discovery using the CLI Enabling a discovered cluster for management 1.8.1. Configure Discovery with the console Configure Discovery in the console to find clusters. When you configure the Discovery feature on your cluster, you must enable a DiscoveryConfig resource to connect to the OpenShift Cluster Manager to begin discovering clusters that are a part of your organization. You can create multiple DiscoveryConfig resources with separate credentials. After you discover clusters, you can import clusters that appear in the Discovered clusters tab of the console. Use the product console to enable Discovery. Required access : Access to the namespace where the credential was created. 1.8.1.1. Prerequisites You need a credential. See Creating a credential for Red Hat OpenShift Cluster Manager to connect to OpenShift Cluster Manager. You need access to the namespaces that were used to configure Discovery. 1.8.1.2. Import discovered clusters from the console To manually import other infrastructure provider discovered clusters, complete the following steps: Go to the existing Clusters page and click the Discovered clusters tab. From the Discovered clusters table, find the cluster that you want to import. From the options menu, choose Import cluster . For discovered clusters, you can import manually using the documentation, or you can choose Import clusters automatically. To import automatically with your credentials or Kubeconfig file, copy and paste the content. Click Import . 1.8.1.3. View discovered clusters After you set up your credentials and discover your clusters for import, you can view them in the console. Click Clusters > Discovered clusters . View the populated table with the following information: Name is the display name that is designated in OpenShift Cluster Manager. If the cluster does not have a display name, a generated name based on the cluster console URL is displayed. If the console URL is missing or was modified manually in OpenShift Cluster Manager, the cluster external ID is displayed. Namespace is the namespace where you created the credential and discovered clusters. Type is the discovered cluster Red Hat OpenShift type. Distribution version is the discovered cluster Red Hat OpenShift version. Infrastructure provider is the cloud provider of the discovered cluster. Last active is the last time the discovered cluster was active. Created is when the discovered cluster was created. Discovered is when the discovered cluster was discovered. You can search for any information in the table, as well. For example, to show only Discovered clusters in a particular namespace, search for that namespace. You can now click Import cluster to create managed clusters. 1.8.2. Enable Discovery using the CLI Enable discovery using the CLI to find clusters that are available from Red Hat OpenShift Cluster Manager. Required access : Administrator 1.8.2.1. Prerequisites Create a credential to connect to Red Hat OpenShift Cluster Manager.
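If you prefer to create the Red Hat OpenShift Cluster Manager credential from the command line instead of the console, a minimal sketch is to store your OpenShift Cluster Manager API token in a Secret in the namespace where you plan to configure Discovery. The ocm-api-token secret name is hypothetical, and the ocmAPIToken key is an assumption about how the Discovery credential is structured, so verify it against Creating a credential for Red Hat OpenShift Cluster Manager:

oc create secret generic ocm-api-token -n <namespace> --from-literal=ocmAPIToken=<your_api_token>

You then reference the secret name in the credential field of the DiscoveryConfig resource that is shown in the next section.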
1.8.2.2. Discovery set up and process Note: The DiscoveryConfig must be named discovery and must be created in the same namespace as the selected credential . See the following DiscoveryConfig sample: apiVersion: discovery.open-cluster-management.io/v1 kind: DiscoveryConfig metadata: name: discovery namespace: <NAMESPACE_NAME> spec: credential: <SECRET_NAME> filters: lastActive: 7 openshiftVersions: - "4.15" Replace SECRET_NAME with the credential that you previously set up. Replace NAMESPACE_NAME with the namespace of SECRET_NAME . Enter the maximum time since last activity of your clusters (in days) to discover. For example, with lastActive: 7 , clusters that were active in the last 7 days are discovered. Enter the versions of Red Hat OpenShift clusters to discover as a list of strings. Note: Every entry in the openshiftVersions list specifies an OpenShift major and minor version. For example, specifying "4.11" includes all patch releases for the OpenShift version 4.11 , such as 4.11.1 and 4.11.2 . 1.8.2.3. View discovered clusters View discovered clusters by running oc get discoveredclusters -n <namespace> where namespace is the namespace where the discovery credential exists. 1.8.2.3.1. DiscoveredClusters Objects are created by the Discovery controller. These DiscoveredClusters represent the clusters that are found in OpenShift Cluster Manager by using the filters and credentials that are specified in the DiscoveryConfig discoveredclusters.discovery.open-cluster-management.io API. The value for name is the cluster external ID: apiVersion: discovery.open-cluster-management.io/v1 kind: DiscoveredCluster metadata: name: fd51aafa-95a8-41f7-a992-6fb95eed3c8e namespace: <NAMESPACE_NAME> spec: activity_timestamp: "2021-04-19T21:06:14Z" cloudProvider: vsphere console: https://console-openshift-console.apps.qe1-vmware-pkt.dev02.red-chesterfield.com creation_timestamp: "2021-04-19T16:29:53Z" credential: apiVersion: v1 kind: Secret name: <SECRET_NAME> namespace: <NAMESPACE_NAME> display_name: qe1-vmware-pkt.dev02.red-chesterfield.com name: fd51aafa-95a8-41f7-a992-6fb95eed3c8e openshiftVersion: 4.15 status: Stale 1.8.3. Enabling a discovered cluster for management Automatically import supported clusters into your hub cluster with the Discovery-Operator for faster cluster management, without manually importing individual clusters. Required access: Cluster administrator 1.8.3.1. Prerequisites Discovery is enabled by default. If you changed default settings, you need to enable Discovery. You must set up the OpenShift Service on AWS command line interface. See Getting started with the OpenShift Service on AWS CLI documentation. 1.8.3.2. Importing discovered OpenShift Service on AWS and hosted control plane clusters automatically The following procedure is an example of how to import your discovered OpenShift Service on AWS and hosted control plane clusters automatically by using the Discovery-Operator . 1.8.3.2.1. Importing from the console To automatically import the DiscoveredCluster resource, you must modify the resource and set the importAsManagedCluster field to true in the console. See the following procedure: Log in to your hub cluster from the console. Select Search from the navigation menu. From the search bar, enter the following query: "DiscoveredCluster". The DiscoveredCluster resource results appear. Go to the DiscoveredCluster resource and set importAsManagedCluster to true .
See the following example, where importAsManagedCluster is set to true and <4.x.z> is your supported OpenShift Container Platform version: apiVersion: discovery.open-cluster-management.io/v1 kind: DiscoveredCluster metadata: name: 28c17977-fc73-4050-b5cc-a5aa2d1d6892 namespace: discovery spec: openshiftVersion: <4.x.z> isManagedCluster: false cloudProvider: aws name: 28c17977-fc73-4050-b5cc-a5aa2d1d6892 displayName: rosa-dc status: Active importAsManagedCluster: true 1 type: <supported-type> 2 1 By setting the field to true , the Discovery-Operator imports the DiscoveredCluster resource, creates a ManagedCluster resource, and, if Red Hat Advanced Cluster Management is installed, creates the KlusterletAddOnConfig resource. It also creates the Secret resources for your automatic import. 2 You must use ROSA or MultiClusterEngineHCP as the parameter value. To verify that the DiscoveredCluster resource is imported, go to the Clusters page. Check the import status of your cluster from the Cluster list tab. If you want to detach managed clusters so that Discovery does not automatically reimport them, select the Detach cluster option. The Discovery-Operator adds the following annotation, discovery.open-cluster-management.io/previously-auto-imported: 'true' . Your DiscoveredCluster resource might resemble the following YAML: apiVersion: discovery.open-cluster-management.io/v1 kind: DiscoveredCluster metadata: annotations: discovery.open-cluster-management.io/previously-auto-imported: 'true' To verify that the DiscoveredCluster resource is not reimported automatically, check for the following message in the Discovery-Operator logs, where "rosa-dc" is this discovered cluster: 2024-06-12T14:11:43.366Z INFO reconcile Skipped automatic import for DiscoveredCluster due to existing 'discovery.open-cluster-management.io/previously-auto-imported' annotation {"Name": "rosa-dc"} If you want to reimport the DiscoveredCluster resource automatically, you must remove the previously mentioned annotation. 1.8.3.2.2. Importing from the command line interface To automatically import the DiscoveredCluster resource from the command line, complete the following steps: To automatically import the DiscoveredCluster resource, set the importAsManagedCluster parameter to true by using the following command after you log in. Replace <name> and <namespace> with your name and namespace: oc patch discoveredcluster <name> -n <namespace> --type='json' -p='[{"op": "replace", "path": "/spec/importAsManagedCluster", "value": true}]' Run the following command to verify that the cluster was imported as a managed cluster: oc get managedcluster <name> To get a description of your OpenShift Service on AWS cluster ID, run the following command from the OpenShift Service on AWS command line interface: rosa describe cluster --cluster=<cluster-name> | grep -o '^ID:.*' For other Kubernetes providers, you must import these infrastructure provider DiscoveredCluster resources manually. Directly apply Kubernetes configurations to the other types of DiscoveredCluster resources. If you enable the importAsManagedCluster field from the DiscoveredCluster resource, it is not imported due to the Discovery webhook. 1.8.3.3. Additional resources See Discovery service introduction . 1.9. Host inventory introduction The host inventory management and on-premises cluster installation are available using the multicluster engine operator central infrastructure management feature.
The central infrastructure management feature is a Red Hat OpenShift Container Platform install experience in multicluster engine operator that focuses on managing bare metal hosts during their lifecycle. The Assisted Installer is an install method for OpenShift Container Platform that uses agents to run pre-installation validations on the target hosts, and a central service to evaluate and track install progress. The infrastructure operator for Red Hat OpenShift is a multicluster engine operator component that manages and installs the workloads that run the Assisted Installer service. You can use the console to create a host inventory, which is a pool of bare metal or virtual machines that you can use to create on-premises OpenShift Container Platform clusters. These clusters can be standalone, with dedicated machines for the control plane, or hosted control planes , where the control plane runs as pods on a hub cluster. You can install standalone clusters by using the console, API, or GitOps by using Zero Touch Provisioning (ZTP). See Installing GitOps ZTP in a disconnected environment in the Red Hat OpenShift Container Platform documentation for more information on ZTP. A machine joins the host inventory after booting with a Discovery Image. The Discovery Image is a Red Hat CoreOS live image that contains the following: An agent that performs discovery, validation, and installation tasks. The necessary configuration for reaching the service on the hub cluster, including the endpoint, token, and static network configuration, if applicable. You have one Discovery Image for each infrastructure environment, which is a set of hosts sharing a common set of properties. The InfraEnv custom resource definition represents this infrastructure environment and associated Discovery Image. You can specify the Red Hat CoreOS version used for the Discovery Image by setting the osImageVersion field in the InfraEnv custom resource, as shown in the sketch that follows the list of topics at the end of this introduction. If you do not specify a value, the latest Red Hat CoreOS version is used. After the host boots and the agent contacts the service, the service creates a new Agent custom resource on the hub cluster representing that host. The Agent resources make up the host inventory. You can install hosts in the inventory as OpenShift nodes later. The agent writes the operating system to the disk, along with the necessary configuration, and reboots the host. Note: Red Hat Advanced Cluster Management 2.9 and later and central infrastructure management support the Nutanix platform by using AgentClusterInstall , which requires additional configuration by creating the Nutanix virtual machines. To learn more, see Optional: Installing on Nutanix in the Assisted Installer documentation. Continue reading to learn more about host inventories and central infrastructure management: Enabling the central infrastructure management service Enabling central infrastructure management on Amazon Web Services Creating a host inventory by using the console Creating a host inventory by using the command line interface Configuring advanced networking for an infrastructure environment Adding hosts to the host inventory by using the Discovery Image Automatically adding bare metal hosts to the host inventory Managing your host inventory Creating a cluster in an on-premises environment Importing an on-premises Red Hat OpenShift Container Platform cluster manually by using central infrastructure management
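As noted earlier in this introduction, you can pin the Discovery Image to a specific Red Hat CoreOS version by setting the osImageVersion field. The following InfraEnv resource is a minimal sketch with hypothetical name, namespace, pull secret, and version values; the full list of InfraEnv fields is described in Creating a host inventory by using the command line interface:

apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: myinfraenv
  namespace: myinfraenv
spec:
  pullSecretRef:
    name: pull-secret
  osImageVersion: "4.15"

If you omit osImageVersion , the latest available Red Hat CoreOS version is used.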
1.9.1. Enabling the central infrastructure management service The central infrastructure management service is provided with the multicluster engine operator and deploys OpenShift Container Platform clusters. The central infrastructure management service is deployed automatically when you enable the MultiClusterHub Operator on the hub cluster, but you have to enable the service manually. See the following sections: Creating a bare metal host custom resource definition Creating or modifying the Provisioning resource Enabling central infrastructure management in disconnected environments Enabling central infrastructure management in connected environments Installing a FIPS-enabled cluster by using the Assisted Installer 1.9.1.1. Prerequisites See the following prerequisites before enabling the central infrastructure management service: You must have a deployed hub cluster on a supported OpenShift Container Platform version and a supported Red Hat Advanced Cluster Management for Kubernetes version. You need internet access for your hub cluster (connected), or a connection to an internal or mirror registry that has a connection to the internet (disconnected) to retrieve the required images for creating the environment. You must open the required ports for bare metal provisioning. See Ensuring required ports are open in the OpenShift Container Platform documentation. You need a bare metal host custom resource definition. You need an OpenShift Container Platform pull secret . See Using image pull secrets for more information. You need a configured default storage class. For disconnected environments only, complete the procedure for Clusters at the network far edge in the OpenShift Container Platform documentation. 1.9.1.2. Creating a bare metal host custom resource definition You need a bare metal host custom resource definition before enabling the central infrastructure management service. Check if you already have a bare metal host custom resource definition by running the following command: oc get crd baremetalhosts.metal3.io If you have a bare metal host custom resource definition, the output shows the date when the resource was created. If you do not have the resource, you receive an error that resembles the following: Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "baremetalhosts.metal3.io" not found If you do not have a bare metal host custom resource definition, download the metal3.io_baremetalhosts.yaml file and apply the content by running the following command to create the resource: oc apply -f metal3.io_baremetalhosts.yaml 1.9.1.3. Creating or modifying the Provisioning resource You need a Provisioning resource before enabling the central infrastructure management service. Check if you have the Provisioning resource by running the following command: oc get provisioning If you already have a Provisioning resource, continue by Modifying the Provisioning resource . If you do not have a Provisioning resource, you receive a No resources found error. Continue by Creating the Provisioning resource . 1.9.1.3.1.
Modifying the Provisioning resource If you already have a Provisioning resource, you must modify the resource if your hub cluster is installed on one of the following platforms: Bare metal Red Hat OpenStack Platform VMware vSphere User-provisioned infrastructure (UPI) method and the platform is None If your hub cluster is installed on a different platform, continue at Enabling central infrastructure management in disconnected environments or Enabling central infrastructure management in connected environments . Modify the Provisioning resource to allow the Bare Metal Operator to watch all namespaces by running the following command: oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}' 1.9.1.3.2. Creating the Provisioning resource If you do not have a Provisioning resource, complete the following steps: Create the Provisioning resource by adding the following YAML content: apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: "Disabled" watchAllNamespaces: true Apply the content by running the following command: oc apply -f 1.9.1.4. Enabling central infrastructure management in disconnected environments To enable central infrastructure management in disconnected environments, complete the following steps: Create a ConfigMap in the same namespace as your infrastructure operator to specify the values for ca-bundle.crt and registries.conf for your mirror registry. Your file ConfigMap might resemble the following example: apiVersion: v1 kind: ConfigMap metadata: name: <mirror-config> namespace: multicluster-engine labels: app: assisted-service data: ca-bundle.crt: | <certificate-content> registries.conf: | unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "registry.redhat.io/multicluster-engine" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.registry.com:5000/multicluster-engine" Note: You must set mirror-by-digest-only to true because release images are specified by using a digest. Registries in the list of unqualified-search-registries are automatically added to an authentication ignore list in the PUBLIC_CONTAINER_REGISTRIES environment variable. The specified registries do not require authentication when the pull secret of the managed cluster is validated. Write the key pairs representing the headers and query parameters that you want to send with every osImage request. If you don't need both parameters, write key pairs for only headers or query parameters. Important: Headers and query parameters are only encrypted if you use HTTPS. Make sure to use HTTPS to avoid security issues. Create a file named headers and add content that resembles the following example: { "Authorization": "Basic xyz" } Create a file named query_params and add content that resembles the following example: { "api_key": "myexampleapikey", } Create a secret from the parameter files that you created by running the following command. If you only created one parameter file, remove the argument for the file that you didn't create: oc create secret generic -n multicluster-engine os-images-http-auth --from-file=./query_params --from-file=./headers If you want to use HTTPS osImages with a self-signed or third-party CA certificate, add the certificate to the image-service-additional-ca ConfigMap . 
To create a certificate, run the following command: oc -n multicluster-engine create configmap image-service-additional-ca --from-file=tls.crt Create the AgentServiceConfig custom resource by saving the following YAML content in the agent_service_config.yaml file: apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent spec: databaseStorage: accessModes: - ReadWriteOnce resources: requests: storage: <db_volume_size> filesystemStorage: accessModes: - ReadWriteOnce resources: requests: storage: <fs_volume_size> mirrorRegistryRef: name: <mirror_config> 1 unauthenticatedRegistries: - <unauthenticated_registry> 2 imageStorage: accessModes: - ReadWriteOnce resources: requests: storage: <img_volume_size> 3 OSImageAdditionalParamsRef: name: os-images-http-auth OSImageCACertRef: name: image-service-additional-ca osImages: - openshiftVersion: "<ocp_version>" 4 version: "<ocp_release_version>" 5 url: "<iso_url>" 6 cpuArchitecture: "x86_64" 1 Replace mirror_config with the name of the ConfigMap that contains your mirror registry configuration details. 2 Include the optional unauthenticated_registry parameter if you are using a mirror registry that does not require authentication. Entries on this list are not validated or required to have an entry in the pull secret. 3 Replace img_volume_size with the size of the volume for the imageStorage field, for example 10Gi per operating system image. The minimum value is 10Gi , but the recommended value is at least 50Gi . This value specifies how much storage is allocated for the images of the clusters. You need to allow 1 GB of image storage for each instance of Red Hat Enterprise Linux CoreOS that is running. You might need to use a higher value if there are many clusters and instances of Red Hat Enterprise Linux CoreOS. 4 Replace ocp_version with the OpenShift Container Platform version to install, for example, 4.14 . 5 Replace ocp_release_version with the specific install version, for example, 49.83.202103251640-0 . 6 Replace iso_url with the ISO url, for example, https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.13/4.13.3/rhcos-4.13.3-x86_64-live.x86_64.iso . You can find other values at the rhoc . If you are using HTTPS osImages with self-signed or third-party CA certificates, reference the certificate in the OSImageCACertRef spec. Important: If you are using the late binding feature and the spec.osImages releases in the AgentServiceConfig custom resource are version 4.13 or later, the OpenShift Container Platform release images that you use when creating your clusters must be the same. The Red Hat Enterprise Linux CoreOS images for version 4.13 and later are not compatible with earlier images. You can verify that your central infrastructure management service is healthy by checking the assisted-service and assisted-image-service deployments and ensuring that their pods are ready and running. 1.9.1.5. 
Enabling central infrastructure management in connected environments To enable central infrastructure management in connected environments, create the AgentServiceConfig custom resource by saving the following YAML content in the agent_service_config.yaml file: apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent spec: databaseStorage: accessModes: - ReadWriteOnce resources: requests: storage: <db_volume_size> 1 filesystemStorage: accessModes: - ReadWriteOnce resources: requests: storage: <fs_volume_size> 2 imageStorage: accessModes: - ReadWriteOnce resources: requests: storage: <img_volume_size> 3 1 Replace db_volume_size with the volume size for the databaseStorage field, for example 10Gi . This value specifies how much storage is allocated for storing files such as database tables and database views for the clusters. The minimum value that is required is 1Gi . You might need to use a higher value if there are many clusters. 2 Replace fs_volume_size with the size of the volume for the filesystemStorage field, for example 200M per cluster and 2-3Gi per supported OpenShift Container Platform version. The minimum value that is required is 1Gi , but the recommended value is at least 100Gi . This value specifies how much storage is allocated for storing logs, manifests, and kubeconfig files for the clusters. You might need to use a higher value if there are many clusters. 3 Replace img_volume_size with the size of the volume for the imageStorage field, for example 10Gi per operating system image. The minimum value is 10Gi , but the recommended value is at least 50Gi . This value specifies how much storage is allocated for the images of the clusters. You need to allow 1 GB of image storage for each instance of Red Hat Enterprise Linux CoreOS that is running. You might need to use a higher value if there are many clusters and instances of Red Hat Enterprise Linux CoreOS. Your central infrastructure management service is configured. You can verify that it is healthy by checking the assisted-service and assisted-image-service deployments and ensuring that their pods are ready and running. 1.9.1.6. Installing a FIPS-enabled cluster by using the Assisted Installer When you install a OpenShift Container Platform cluster that is version 4.15 and earlier and is in FIPS mode, you must specify that the installers run Red Hat Enterprise Linux (RHEL) version 8 in the AgentServiceConfig resource. When you install a OpenShift Container Platform cluster that is version 4.16 and later and is in FIPS mode, do not specify any RHEL version for the installers. Required access: You must have access to edit the AgentServiceConfig and AgentClusterInstall resources. 1.9.1.6.1. Installing a OpenShift Container Platform cluster version 4.15 and earlier If you install a OpenShift Container Platform cluster version 4.15 and earlier, complete the following steps to update the AgentServiceConfig resource: Log in to you managed cluster by using the following command: oc login Add the agent-install.openshift.io/service-image-base: el8 annotation in the AgentServiceConfig resource. Your AgentServiceConfig resource might resemble the following YAML: apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: annotations: agent-install.openshift.io/service-image-base: el8 ... 1.9.1.6.2. 
Installing a OpenShift Container Platform cluster version 4.16 and later If you install a OpenShift Container Platform cluster version 4.16 and later, complete the following steps to update the AgentServiceConfig resource: Log in to you managed cluster by using the following command: oc login If the agent-install.openshift.io/service-image-base: el8 annotation is present in the AgentServiceConfig resource, remove the annotation. 1.9.1.7. Additional resources For additional information about zero touch provisioning, see Challenges of the network far edge in the OpenShift Container Platform documentation. Using image pull secrets 1.9.2. Enabling central infrastructure management on Amazon Web Services If you are running your hub cluster on Amazon Web Services and want to enable the central infrastructure management service, complete the following steps after Enabling the central infrastructure management service : Make sure you are logged in at the hub cluster and find the unique domain configured on the assisted-image-service by running the following command: Your domain might resemble the following example: assisted-image-service-multicluster-engine.apps.<yourdomain>.com Make sure you are logged in at the hub cluster and create a new IngressController with a unique domain using the NLB type parameter. See the following example: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: ingress-controller-with-nlb namespace: openshift-ingress-operator spec: domain: nlb-apps.<domain>.com routeSelector: matchLabels: router-type: nlb endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB Add <yourdomain> to the domain parameter in IngressController by replacing <domain> in nlb-apps.<domain>.com with <yourdomain> . Apply the new IngressController by running the following command: Make sure that the value of the spec.domain parameter of the new IngressController is not in conflict with an existing IngressController by completing the following steps: List all IngressControllers by running the following command: Run the following command on each of the IngressControllers , except the ingress-controller-with-nlb that you just created: If the spec.domain report is missing, add a default domain that matches all of the routes that are exposed in the cluster except nlb-apps.<domain>.com . If the spec.domain report is provided, make sure that the nlb-apps.<domain>.com route is excluded from the specified range. Run the following command to edit the assisted-image-service route to use the nlb-apps location: The default namespace is where you installed the multicluster engine operator. Add the following lines to the assisted-image-service route: metadata: labels: router-type: nlb name: assisted-image-service In the assisted-image-service route, find the URL value of spec.host . The URL might resemble the following example: Replace apps in the URL with nlb-apps to match the domain configured in the new IngressController . To verify that the central infrastructure management service is enabled on Amazon Web Services, run the following command to verify that the pods are healthy: Create a new host inventory and ensure that the download URL uses the new nlb-apps URL. 1.9.3. Creating a host inventory by using the console You can create a host inventory (infrastructure environment) to discover physical or virtual machines that you can install your OpenShift Container Platform clusters on. 1.9.3.1. 
Prerequisites You must enable the central infrastructure management service. See Enabling the central infrastructure management service for more information. 1.9.3.2. Creating a host inventory Complete the following steps to create a host inventory by using the console: From the console, navigate to Infrastructure > Host inventory and click Create infrastructure environment . Add the following information to your host inventory settings: Name: A unique name for your infrastructure environment. Creating an infrastructure environment by using the console also creates a new namespace for the InfraEnv resource with the name you chose. If you create InfraEnv resources by using the command line interface and want to monitor the resources in the console, use the same name for your namespace and the InfraEnv . Network type: Specifies if the hosts you add to your infrastructure environment use DHCP or static networking. Static networking configuration requires additional steps. Location: Specifies the geographic location of the hosts. The geographic location can be used to define which data center the hosts are located. Labels: Optional field where you can add labels to the hosts that are discovered with this infrastructure environment. The specified location is automatically added to the list of labels. Infrastructure provider credentials: Selecting an infrastructure provider credential automatically populates the pull secret and SSH public key fields with information in the credential. For more information, see Creating a credential for an on-premises environment . Pull secret: Your OpenShift Container Platform pull secret that enables you to access the OpenShift Container Platform resources. This field is automatically populated if you selected an infrastructure provider credential. SSH public key: The SSH key that enables the secure communication with the hosts. You can use it to connect to the host for troubleshooting. After installing a cluster, you can no longer connect to the host with the SSH key. The key is generally in your id_rsa.pub file. The default file path is ~/.ssh/id_rsa.pub . This field is automatically populated if you selected an infrastructure provider credential that contains the value of a SSH public key. If you want to enable proxy settings for your hosts, select the setting to enable it and enter the following information: HTTP Proxy URL: The URL of the proxy for HTTP requests. HTTPS Proxy URL: The URL of the proxy for HTTP requests. The URL must start with HTTP. HTTPS is not supported. If you do not provide a value, your HTTP proxy URL is used by default for both HTTP and HTTPS connections. No Proxy domains: A list of domains separated by commas that you do not want to use the proxy with. Start a domain name with a period ( . ) to include all of the subdomains that are in that domain. Add an asterisk ( * ) to bypass the proxy for all destinations. Optionally add your own Network Time Protocol (NTP) sources by providing a comma separated list of IP or domain names of the NTP pools or servers. If you need advanced configuration options that are not available in the console, continue to Creating a host inventory by using the command line interface . If you do not need advanced configuration options, you can continue by configuring static networking, if required, and begin adding hosts to your infrastructure environment. 1.9.3.3. Accessing a host inventory To access a host inventory, select Infrastructure > Host inventory in the console. 
Select your infrastructure environment from the list to view the details and hosts. 1.9.3.4. Additional resources Enabling the central infrastructure management service Creating a credential for an on-premises environment Creating a host inventory by using the command line interface If you created a host inventory as part of the process to configure hosted control planes on bare metal, complete the following procedures: Adding hosts to the host inventory by using the Discovery Image Automatically adding bare metal hosts to the host inventory 1.9.4. Creating a host inventory by using the command line interface You can create a host inventory (infrastructure environment) to discover physical or virtual machines that you can install your OpenShift Container Platform clusters on. Use the command line interface instead of the console for automated deployments, or for the following advanced configuration options: Automatically bind discovered hosts to an existing cluster definition Override the ignition configuration of the Discovery Image Control the iPXE behavior Modify kernel arguments for the Discovery Image Pass additional certificates that you want the host to trust during the discovery phase Select a Red Hat CoreOS version to boot for testing that is not the default option of the newest version 1.9.4.1. Prerequisites You must enable the central infrastructure management service. See Enabling the central infrastructure management service for more information. 1.9.4.2. Creating a host inventory Complete the following steps to create a host inventory (infrastructure environment) by using the command line interface: Log in to your hub cluster by running the following command: Create a namespace for your resource. Create a file named, namespace.yaml , and add the following content: apiVersion: v1 kind: Namespace metadata: name: <your_namespace> 1 1 Use the same name for your namespace and your infrastructure environment to monitor your inventory in the console. Apply the YAML content by running the following command: Create a Secret custom resource containing your OpenShift Container Platform pull secret . Create the pull-secret.yaml file and add the following content: apiVersion: v1 kind: Secret type: kubernetes.io/dockerconfigjson metadata: name: pull-secret 1 namespace: <your_namespace> stringData: .dockerconfigjson: <your_pull_secret> 2 1 Add your namesapce. 2 Add your pull secret. Apply the YAML content by running the following command: Create the infrastructure environment. Create the infra-env.yaml file and add the following content. Replace values where needed: apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: myinfraenv namespace: <your_namespace> spec: proxy: httpProxy: <http://user:password@ipaddr:port> httpsProxy: <http://user:password@ipaddr:port> noProxy: additionalNTPSources: sshAuthorizedKey: pullSecretRef: name: <name> agentLabels: <key>: <value> nmStateConfigLabelSelector: matchLabels: <key>: <value> clusterRef: name: <cluster_name> namespace: <project_name> ignitionConfigOverride: '{"ignition": {"version": "3.1.0"}, ...}' cpuArchitecture: x86_64 ipxeScriptType: DiscoveryImageAlways kernelArguments: - operation: append value: audit=0 additionalTrustBundle: <bundle> osImageVersion: <version> See the following field descriptions in the InfraEnv table: Table 1.7. InfraEnv field table Field Optional or required Description proxy Optional Defines the proxy settings for agents and clusters that use the InfraEnv resource. 
If you do not set the proxy value, agents are not configured to use a proxy. httpProxy Optional The URL of the proxy for HTTP requests. The URL must start with http . HTTPS is not supported.. httpsProxy Optional The URL of the proxy for HTTP requests. The URL must start with http . HTTPS is not supported. noProxy Optional A list of domains and CIDRs separated by commas that you do not want to use the proxy with. additionalNTPSources Optional A list of Network Time Protocol (NTP) sources (hostname or IP) to add to all hosts. They are added to NTP sources that are configured by using other options, such as DHCP. sshAuthorizedKey Optional SSH public keys that are added to all hosts for use in debugging during the discovery phase. The discovery phase is when the host boots the Discovery Image. name Required The name of the Kubernetes secret containing your pull secret. agentLabels Optional Labels that are automatically added to the Agent resources representing the hosts that are discovered with your InfraEnv . Make sure to add your key and value. nmStateConfigLabelSelector Optional Consolidates advanced network configuration such as static IPs, bridges, and bonds for the hosts. The host network configuration is specified in one or more NMStateConfig resources with labels you choose. The nmStateConfigLabelSelector property is a Kubernetes label selector that matches your chosen labels. The network configuration for all NMStateConfig labels that match this label selector is included in the Discovery Image. When you boot, each host compares each configuration to its network interfaces and applies the appropriate configuration. To learn more about advanced network configuration, see Configuring advanced networking for an infrastructure environment . clusterRef Optional References an existing ClusterDeployment resource that describes a standalone on-premises cluster. Not set by default. If clusterRef is not set, then the hosts can be bound to one or more clusters later. You can remove the host from one cluster and add it to another. If clusterRef is set, then all hosts discovered with your InfraEnv are automatically bound to the specified cluster. If the cluster is not installed yet, then all discovered hosts are part of its installation. If the cluster is already installed, then all discovered hosts are added. ignitionConfigOverride Optional Modifies the ignition configuration of the Red Hat CoreOS live image, such as adding files. Make sure to only use ignitionConfigOverride if you need it. Must use ignition version 3.1.0, regardless of the cluster version. cpuArchitecture Optional Choose one of the following supported CPU architectures: x86_64, aarch64, ppc64le, or s390x. The default value is x86_64. ipxeScriptType Optional Causes the image service to always serve the iPXE script when set to the default value of DiscoveryImageAlways and when you are using iPXE to boot. As a result, the host boots from the network discovery image. Setting the value to BootOrderControl causes the image service to decide when to return the iPXE script, depending on the host state, which causes the host to boot from the disk when the host is provisioned and is part of a cluster. kernelArguments Optional Allows modifying the kernel arguments for when the Discovery Image boots. Possible values for operation are append , replace , or delete . 
additionalTrustBundle Optional A PEM-encoded X.509 certificate bundle, usually needed if the hosts are in a network with a re-encrypting man-in-the-middle (MITM) proxy, or if the hosts need to trust certificates for other purposes, such as container image registries. Hosts discovered by your InfraEnv trust the certificates in this bundle. Clusters created from the hosts discovered by your InfraEnv also trust the certificates in this bundle. osImageVersion Optional The Red Hat CoreOS image version to use for your InfraEnv . Make sure the version refers to the OS image specified in either the AgentServiceConfig.spec.osImages or in the default OS images list. Each release has a specific set of Red Hat CoreOS image versions. The OSImageVersion must match an OpenShift Container Platform version in the OS images list. You cannot specify OSImageVersion and ClusterRef at the same time. If you want to use another version of the Red Hat CoreOS image that does not exist by default, then you must manually add the version by specifying it in the AgentServiceConfig.spec.osImages . To learn more about adding versions, see Enabling the central infrastructure management service . Apply the YAML content by running the following command: To verify that your host inventory is created, check the status with the following command: See the following list of notable properties: conditions : The standard Kubernetes conditions indicating if the image was created succesfully. isoDownloadURL : The URL to download the Discovery Image. createdTime : The time at which the image was last created. If you modify the InfraEnv , make sure that the timestamp has been updated before downloading a new image. Note: If you modify the InfraEnv resource, make sure that the InfraEnv has created a new Discovery Image by looking at the createdTime property. If you already booted hosts, boot them again with the latest Discovery Image. You can continue by configuring static networking, if required, and begin adding hosts to your infrastructure environment. 1.9.4.3. Additional resources Configuring advanced networking for an infrastructure environment Enabling the central infrastructure management service 1.9.5. Configuring advanced networking for an infrastructure environment For hosts that require networking beyond DHCP on a single interface, you must configure advanced networking. The required configuration includes creating one or more instances of the NMStateConfig resource that describes the networking for one or more hosts. Each NMStateConfig resource must contain a label that matches the nmStateConfigLabelSelector on your InfraEnv resource. See Creating a host inventory by using the command line interface to learn more about the nmStateConfigLabelSelector . The Discovery Image contains the network configurations defined in all referenced NMStateConfig resources. After booting, each host compares each configuration to its network interfaces and applies the appropriate configuration. 1.9.5.1. Prerequisites You must enable the central infrastructure management service. See Enabling the central infrastructure management service for more information. You must create a host inventory. See Creating a host inventory by using the console for more information. 1.9.5.2. 
Configuring advanced networking by using the command line interface To configure advanced networking for your infrastructure environment by using the command line interface, complete the following steps: Create a file named nmstateconfig.yaml and add content that is similar to the following template. Replace values where needed:

apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
  name: mynmstateconfig
  namespace: <your-infraenv-namespace>
  labels:
    some-key: <some-value>
spec:
  config:
    interfaces:
      - name: eth0
        type: ethernet
        state: up
        mac-address: 02:00:00:80:12:14
        ipv4:
          enabled: true
          address:
            - ip: 192.168.111.30
              prefix-length: 24
          dhcp: false
      - name: eth1
        type: ethernet
        state: up
        mac-address: 02:00:00:80:12:15
        ipv4:
          enabled: true
          address:
            - ip: 192.168.140.30
              prefix-length: 24
          dhcp: false
    dns-resolver:
      config:
        server:
          - 192.168.126.1
    routes:
      config:
        - destination: 0.0.0.0/0
          next-hop-address: 192.168.111.1
          next-hop-interface: eth1
          table-id: 254
        - destination: 0.0.0.0/0
          next-hop-address: 192.168.140.1
          next-hop-interface: eth1
          table-id: 254
  interfaces:
    - name: "eth0"
      macAddress: "02:00:00:80:12:14"
    - name: "eth1"
      macAddress: "02:00:00:80:12:15"

Table 1.8. NMStateConfig field table Field Optional or required Description name Required Use a name that is relevant to the host or hosts you are configuring. namespace Required The namespace must match the namespace of your InfraEnv resource. some-key Required Add one or more labels that match the nmStateConfigLabelSelector on your InfraEnv resource. config Optional Describes the network settings in NMState format. See Declarative Network API for the format specification and additional examples. The configuration can also apply to a single host, where you have one NMStateConfig resource per host, or can describe the interfaces for multiple hosts in a single NMStateConfig resource. interfaces Optional Describes the mapping between interface names found in the specified NMState configuration and MAC addresses found on the hosts. Make sure the mapping uses physical interfaces present on a host. For example, when the NMState configuration defines a bond or VLAN, the mapping only contains an entry for parent interfaces. The mapping has the following purposes: * Allows you to use interface names in the configuration that do not match the interface names on a host. You might find this useful because the operating system chooses the interface names, which might not be predictable. * Tells a host what MAC addresses to look for after booting and applies the correct NMState configuration.

Note: The Image Service automatically creates a new image when you update any InfraEnv properties or change the NMStateConfig resources that match its label selector. If you add NMStateConfig resources after creating the InfraEnv resource, make sure that the InfraEnv creates a new Discovery Image by checking the createdTime property in your InfraEnv . If you already booted hosts, boot them again with the latest Discovery Image. Apply the YAML content by running the following command:

1.9.5.3. Additional resources Creating a host inventory by using the command line interface Declarative Network API

1.9.6. Adding hosts to the host inventory by using the Discovery Image After you create your host inventory (infrastructure environment), you can discover your hosts and add them to your inventory. To add hosts to your inventory, choose a method to download an ISO file and attach it to each server.
For example, you can download ISO files by using virtual media, or by writing the ISO file to a USB drive. Important: To prevent the installation from failing, keep the Discovery ISO media connected to the device during the installation process, and set each host to boot from the device one time. Prerequisites Adding hosts by using the console Adding hosts by using the command line interface Hosting iPXE artifacts with HTTP or HTTPS Additional resources

1.9.6.1. Prerequisites You must enable the central infrastructure management service. See Enabling the central infrastructure management service for more information. You must create a host inventory. See Creating a host inventory by using the console for more information.

1.9.6.2. Adding hosts by using the console Download the ISO file by completing the following steps: Select Infrastructure > Host inventory in the console. Select your infrastructure environment from the list. Click Add hosts and select With Discovery ISO . You now see a URL to download the ISO file. Booted hosts appear in the host inventory table. Hosts might take a few minutes to appear. Note: By default, the ISO that is provided is a minimal ISO. The minimal ISO does not contain the root file system, RootFS . The RootFS is downloaded later. To download the full ISO, replace minimal.iso in the URL with full.iso . Approve each host so that you can use it. You can select hosts from the inventory table by clicking Actions and selecting Approve .

1.9.6.3. Adding hosts by using the command line interface The URL to download the ISO file is in the isoDownloadURL property in the status of your InfraEnv resource. See Creating a host inventory by using the command line interface for more information about the InfraEnv resource. Each booted host creates an Agent resource in the same namespace. Run the following command to view the download URL in the InfraEnv custom resource:

oc get infraenv -n <infra env namespace> <infra env name> -o jsonpath='{.status.isoDownloadURL}'

See the following output: Note: By default, the ISO that is provided is a minimal ISO. The minimal ISO does not contain the root file system, RootFS . The RootFS is downloaded later. To download the full ISO, replace minimal.iso in the URL with full.iso . Use the URL to download the ISO file and boot your hosts with the ISO file. Next, you need to approve each host. See the following procedure: Run the following command to list all of your Agents :

oc get agent -n <infra env namespace>

You get output that is similar to the following example: Approve any Agent from the list with a false approval status. Run the following command:

oc patch agent -n <infra env namespace> <agent name> -p '{"spec":{"approved":true}}' --type merge

Run the following command to confirm approval status:

oc get agent -n <infra env namespace>

You get output that is similar to the following example with a true value:

1.9.6.4. Hosting iPXE artifacts with HTTP or HTTPS You can change how iPXE artifacts are hosted by editing the spec.iPXEHTTPRoute field in the AgentServiceConfig custom resource. Set the field to enabled to use HTTP for iPXE artifacts. Set the field to disabled to use HTTPS for iPXE artifacts. The default value is disabled .
See the following example, where the spec.iPXEHTTPRoute field is set to enabled : apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent spec: iPXEHTTPRoute: enabled If you set the value to enabled , the following endpoints are exposed through HTTP: api/assisted-installer/v2/infra-envs/<id>/downloads/files?file_name=ipxe-script in assisted-service boot-artifacts/ and images/<infra-env-id>/pxe-initrd in assisted-image-service 1.9.6.5. Additional resources Creating a host inventory by using the command line interface 1.9.7. Automatically adding bare metal hosts to the host inventory After creating your infrastructure environment, you can discover your hosts and add them to your host inventory. You can automate booting the Discovery Image of your infrastructure environment by making the bare metal operator communicate with the Baseboard Management Controller (BMC) of each bare metal host. Create a BareMetalHost resource and associated BMC secret for each host. The automation is set by a label on the BareMetalHost that references your infrastructure environment. The automation performs the following actions: Boots each bare metal host with the Discovery Image represented by the infrastructure environment Reboots each host with the latest Discovery Image in case the infrastructure environment or any associated network configurations is updated Associates each Agent resource with its corresponding BareMetalHost resource upon discovery Updates Agent resource properties based on information from the BareMetalHost , such as hostname, role, and installation disk Approves the Agent for use as a cluster node 1.9.7.1. Prerequisites You must enable the central infrastructure management service. See Enabling the central infrastructure management service for more information. You must create a host inventory. See Creating a host inventory by using the console for more information. 1.9.7.2. Adding bare metal hosts by using the console Complete the following steps to automatically add bare metal hosts to your host inventory by using the console: Select Infrastructure > Host inventory in the console. Select your infrastructure environment from the list. Click Add hosts and select With BMC Form . Add the required information and click Create . To learn more about BMC address formatting, see BMC addressing in the additional resources section. 1.9.7.3. Adding bare metal hosts by using the command line interface Complete the following steps to automatically add bare metal hosts to your host inventory by using the command line interface. Create a BMC secret by applying the following YAML content and replacing values where needed: apiVersion: v1 kind: Secret metadata: name: <bmc-secret-name> namespace: <your_infraenv_namespace> 1 type: Opaque data: username: <username> password: <password> 1 The namespace must be the same as the namespace of your InfraEnv . 
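For example, assuming you save the Secret manifest to a file such as bmc-secret.yaml (a hypothetical file name that is not defined by this procedure), you can apply it with the oc CLI. Because the values under data: in a Kubernetes Secret must be base64-encoded, encode the BMC credentials first, or use the stringData: field instead to supply plain-text values. The following is a minimal sketch:

# Encode the BMC credentials; replace the placeholder values with your own.
echo -n '<username>' | base64
echo -n '<password>' | base64

# Apply the Secret manifest (hypothetical file name).
oc apply -f bmc-secret.yaml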
Create a bare metal host by applying the following YAML content and replacing values where needed:

apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: <bmh-name>
  namespace: <your-infraenv-namespace> 1
  annotations:
    inspect.metal3.io: disabled
    bmac.agent-install.openshift.io/hostname: <hostname> 2
    bmac.agent-install.openshift.io/role: <role> 3
  labels:
    infraenvs.agent-install.openshift.io: <your-infraenv> 4
spec:
  online: true
  automatedCleaningMode: disabled 5
  bootMACAddress: <your-mac-address> 6
  bmc:
    address: <machine-address> 7
    credentialsName: <bmc-secret-name> 8
  rootDeviceHints:
    deviceName: /dev/sda 9

1 The namespace must be the same as the namespace of your InfraEnv . 2 Optional: Replace with the name of your host. 3 Optional: Possible values are master or worker . 4 The name must match the name of your InfraEnv and exist in the same namespace. 5 If you do not set a value, the metadata value is automatically used. 6 Make sure the MAC address matches the MAC address of one of your host interfaces. 7 Use the address of the BMC. To learn more, see Port access for the out-of-band management IP address and BMC addressing in the additional resources section. 8 Make sure that the credentialsName value matches the name of the BMC secret you created. 9 Optional: Select the installation disk. See The BareMetalHost spec for the available root device hints. After the host is booted with the Discovery Image and the corresponding Agent resource is created, the installation disk is set according to this hint. After turning on the host, the image starts downloading. This might take a few minutes. When the host is discovered, an Agent custom resource is created automatically.

1.9.7.4. Removing managed cluster nodes by using the command line interface To remove managed cluster nodes from a managed cluster, you need a hub cluster that is running on a supported OpenShift Container Platform version. Any static networking configuration required for the node to boot must be available. Make sure not to delete NMStateConfig resources when you delete the agent and bare metal host.

1.9.7.4.1. Removing managed cluster nodes with a bare metal host If you have a bare metal host on your hub cluster and want to remove managed cluster nodes from a managed cluster, complete the following steps: Add the following annotation to the BareMetalHost resource of the node that you want to delete: Delete the BareMetalHost resource by running the following command. Replace <bmh-name> with the name of your BareMetalHost :

1.9.7.4.2. Removing managed cluster nodes without a bare metal host If you do not have a bare metal host on your hub cluster and you want to remove managed cluster nodes from a managed cluster, you can unbind the agent by removing the clusterDeploymentName field from the Agent specification, or delete the Agent custom resource that corresponds with the node that you are removing. If you want to delete an Agent resource from the hub cluster, but do not want the node removed from the managed cluster, you can set the annotation agent.agent-install.openshift.io/skip-spoke-cleanup to true on the Agent resource before you remove it. See the Deleting nodes instructions in the OpenShift Container Platform documentation.

1.9.7.5. Binding and unbinding hosts You can bind hosts to a Red Hat OpenShift Container Platform cluster by setting the spec.clusterDeploymentName field in the Agent custom resource, or by setting the bmac.agent-install.openshift.io/cluster-reference bare metal host annotation.
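The following is a minimal sketch of both approaches, using placeholder names such as <agent_name> and <bmh_name> that are not defined by this documentation; the annotation semantics are described in more detail in the next paragraph:

# Bind by setting spec.clusterDeploymentName on the Agent resource.
oc patch agent <agent_name> -n <infraenv_namespace> --type merge \
  -p '{"spec":{"clusterDeploymentName":{"name":"<cluster_name>","namespace":"<cluster_namespace>"}}}'

# Bind by setting the bare metal host annotation to the <cluster-namespace>/<cluster-name> format.
oc annotate baremetalhost <bmh_name> -n <infraenv_namespace> \
  bmac.agent-install.openshift.io/cluster-reference=<cluster_namespace>/<cluster_name>

# Unbind by setting the annotation to an empty string.
oc annotate baremetalhost <bmh_name> -n <infraenv_namespace> --overwrite \
  bmac.agent-install.openshift.io/cluster-reference=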
The bmac.agent-install.openshift.io/cluster-reference bare metal host annotation controls the connection to your OpenShift Container Platform cluster, and binds hosts to or unbinds them from a specific cluster. You can use the bmac.agent-install.openshift.io/cluster-reference annotation in one of the following three ways: If you do not set the annotation in the bare metal host, no changes apply to the host. If you set the annotation with an empty string value, the host unbinds. If you set the annotation and use a string value that follows the <cluster-namespace>/<cluster-name> format, the host binds to the cluster that your ClusterDeployment custom resource represents. Note: If the InfraEnv that the host belongs to already contains a cluster-reference annotation, the bmac.agent-install.openshift.io/cluster-reference annotation is ignored.

1.9.7.6. Additional resources For additional information about zero touch provisioning, see Clusters at the network far edge in the OpenShift Container Platform documentation. To learn about the required ports for using a bare metal host, see Port access for the out-of-band management IP address in the OpenShift Container Platform documentation. To learn about root device hints, see Bare metal configuration in the OpenShift Container Platform documentation. Using image pull secrets Creating a credential for an on-premises environment To learn more about scaling compute machines, see Manually scaling a compute machine set in the OpenShift Container Platform documentation. To learn more about BMC address formatting, see BMC addressing in the OpenShift Container Platform documentation.

1.9.8. Managing your host inventory You can manage your host inventory and edit existing hosts by using the console, or by using the command line interface and editing the Agent resource.

1.9.8.1. Managing your host inventory by using the console Each host that you successfully boot with the Discovery ISO appears as a row in your host inventory. You can use the console to edit and manage your hosts. If you booted the host manually and are not using the bare metal operator automation, you must approve the host in the console before you can use it. Hosts that are ready to be installed as OpenShift nodes have the Available status.

1.9.8.2. Managing your host inventory by using the command line interface An Agent resource represents each host. You can set the following properties in an Agent resource: clusterDeploymentName Set this property to the namespace and name of the ClusterDeployment you want to use if you want to install the host as a node in a cluster. Optional. role Sets the role for the host in the cluster. Possible values are master , worker , and auto-assign . The default value is auto-assign . hostname Sets the host name for the host. Optional if the host is automatically assigned a valid host name, for example by using DHCP. approved Indicates if the host can be installed as an OpenShift node. This property is a boolean with a default value of False . If you booted the host manually and are not using the bare metal operator automation, you must set this property to True before installing the host. installation_disk_id The ID of the installation disk you chose that is visible in the inventory of the host. installerArgs A JSON-formatted string containing overrides for the coreos-installer arguments of the host. You can use this property to modify kernel arguments.
See the following example syntax: ignitionConfigOverrides A JSON-formatted string containing overrides for the ignition configuration of the host. You can use this property to add files to the host by using ignition. See the following example syntax: nodeLabels A list of labels that are applied to the node after the host is installed. The status of an Agent resource has the following properties: role Sets the role for the host in the cluster. If you previously set a role in the Agent resource, the value appears in the status . inventory Contains host properties that the agent running on the host discovers. progress The host installation progress. ntpSources The configured Network Time Protocol (NTP) sources of the host. conditions Contains the following standard Kubernetes conditions with a True or False value: SpecSynced: True if all specified properties are successfully applied. False if some error was encountered. Connected: True if the agent connection to the installation service is not obstructed. False if the agent has not contacted the installation service in some time. RequirementsMet: True if the host is ready to begin the installation. Validated: True if all host validations pass. Installed: True if the host is installed as an OpenShift node. Bound: True if the host is bound to a cluster. Cleanup: False if the request to delete the Agent resouce fails. debugInfo Contains URLs for downloading installation logs and events. validationsInfo Contains information about validations that the agent runs after the host is discovered to ensure that the installation is successful. Troubleshoot if the value is False . installation_disk_id The ID of the installation disk you chose that is visible in the inventory of the host. 1.9.8.3. Additional resources Accessing a host inventory coreos-installer install 1.10. APIs You can access the following APIs for cluster lifecycle management with the multicluster engine operator. User required access: You can only perform actions that your role is assigned. Note: You can also access all APIs from the integrated console. From the local-cluster view, navigate to Home > API Explorer to explore API groups. For more information, review the API documentation for each of the following resources: Clusters API ClusterSets API (v1beta2) Clusterview API ClusterSetBindings API (v1beta2) MultiClusterEngine API Placements API (v1beta1) PlacementDecisions API (v1beta1) ManagedServiceAccount API KlusterletConfig API (v1alpha1) 1.10.1. Clusters API 1.10.1.1. Overview This documentation is for the cluster resource for multicluster engine for Kubernetes operator. Cluster resource has four possible requests: create, query, delete and update. 1.10.1.1.1. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.10.1.1.2. Tags cluster.open-cluster-management.io : Create and manage clusters 1.10.1.2. Paths 1.10.1.2.1. Query all clusters 1.10.1.2.1.1. Description Query your clusters for more details. 1.10.1.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.10.1.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.1.2.1.4. Consumes cluster/yaml 1.10.1.2.1.5. Tags cluster.open-cluster-management.io 1.10.1.2.2. Create a cluster 1.10.1.2.2.1. Description Create a cluster 1.10.1.2.2.2. 
Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the cluster to be created. Cluster 1.10.1.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.1.2.2.4. Consumes cluster/yaml 1.10.1.2.2.5. Tags cluster.open-cluster-management.io 1.10.1.2.2.6. Example HTTP request 1.10.1.2.2.6.1. Request body { "apiVersion" : "cluster.open-cluster-management.io/v1", "kind" : "ManagedCluster", "metadata" : { "labels" : { "vendor" : "OpenShift" }, "name" : "cluster1" }, "spec": { "hubAcceptsClient": true, "managedClusterClientConfigs": [ { "caBundle": "test", "url": "https://test.com" } ] }, "status" : { } } 1.10.1.2.3. Query a single cluster 1.10.1.2.3.1. Description Query a single cluster for more details. 1.10.1.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path cluster_name required Name of the cluster that you want to query. string 1.10.1.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.1.2.3.4. Tags cluster.open-cluster-management.io 1.10.1.2.4. Delete a cluster 1.10.1.2.4.1. Description Delete a single cluster 1.10.1.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path cluster_name required Name of the cluster that you want to delete. string 1.10.1.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.1.2.4.4. Tags cluster.open-cluster-management.io 1.10.1.3. Definitions 1.10.1.3.1. Cluster Name Schema apiVersion required string kind required string metadata required object spec required spec spec Name Schema hubAcceptsClient required bool managedClusterClientConfigs optional < managedClusterClientConfigs > array leaseDurationSeconds optional integer (int32) managedClusterClientConfigs Name Description Schema URL required string CABundle optional Pattern : "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?USD" string (byte) 1.10.2. Clustersets API (v1beta2) 1.10.2.1. Overview This documentation is for the Clusterset resource for multicluster engine for Kubernetes operator. Clusterset resource has four possible requests: create, query, delete and update. 1.10.2.1.1. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.10.2.1.2. Tags cluster.open-cluster-management.io : Create and manage Clustersets 1.10.2.2. Paths 1.10.2.2.1. Query all clustersets 1.10.2.2.1.1. Description Query your Clustersets for more details. 1.10.2.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.10.2.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.2.2.1.4. Consumes clusterset/yaml 1.10.2.2.1.5. 
Tags cluster.open-cluster-management.io 1.10.2.2.2. Create a clusterset 1.10.2.2.2.1. Description Create a Clusterset. 1.10.2.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the clusterset to be created. Clusterset 1.10.2.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.2.2.2.4. Consumes clusterset/yaml 1.10.2.2.2.5. Tags cluster.open-cluster-management.io 1.10.2.2.2.6. Example HTTP request 1.10.2.2.2.6.1. Request body { "apiVersion" : "cluster.open-cluster-management.io/v1beta2", "kind" : "ManagedClusterSet", "metadata" : { "name" : "clusterset1" }, "spec": { }, "status" : { } } 1.10.2.2.3. Query a single clusterset 1.10.2.2.3.1. Description Query a single clusterset for more details. 1.10.2.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clusterset_name required Name of the clusterset that you want to query. string 1.10.2.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.2.2.3.4. Tags cluster.open-cluster-management.io 1.10.2.2.4. Delete a clusterset 1.10.2.2.4.1. Description Delete a single clusterset. 1.10.2.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clusterset_name required Name of the clusterset that you want to delete. string 1.10.2.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.2.2.4.4. Tags cluster.open-cluster-management.io 1.10.2.3. Definitions 1.10.2.3.1. Clusterset Name Schema apiVersion required string kind required string metadata required object 1.10.3. Clustersetbindings API (v1beta2) 1.10.3.1. Overview This documentation is for the clustersetbinding resource for multicluster engine for Kubernetes. Clustersetbinding resource has four possible requests: create, query, delete and update. 1.10.3.1.1. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.10.3.1.2. Tags cluster.open-cluster-management.io : Create and manage clustersetbindings 1.10.3.2. Paths 1.10.3.2.1. Query all clustersetbindings 1.10.3.2.1.1. Description Query your clustersetbindings for more details. 1.10.3.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string 1.10.3.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.3.2.1.4. Consumes clustersetbinding/yaml 1.10.3.2.1.5. Tags cluster.open-cluster-management.io 1.10.3.2.2. Create a clustersetbinding 1.10.3.2.2.1. Description Create a clustersetbinding. 1.10.3.2.2.2. 
Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Body body required Parameters describing the clustersetbinding to be created. Clustersetbinding 1.10.3.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.3.2.2.4. Consumes clustersetbinding/yaml 1.10.3.2.2.5. Tags cluster.open-cluster-management.io 1.10.3.2.2.6. Example HTTP request 1.10.3.2.2.6.1. Request body { "apiVersion" : "cluster.open-cluster-management.io/v1", "kind" : "ManagedClusterSetBinding", "metadata" : { "name" : "clusterset1", "namespace" : "ns1" }, "spec": { "clusterSet": "clusterset1" }, "status" : { } } 1.10.3.2.3. Query a single clustersetbinding 1.10.3.2.3.1. Description Query a single clustersetbinding for more details. 1.10.3.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Path clustersetbinding_name required Name of the clustersetbinding that you want to query. string 1.10.3.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.3.2.3.4. Tags cluster.open-cluster-management.io 1.10.3.2.4. Delete a clustersetbinding 1.10.3.2.4.1. Description Delete a single clustersetbinding. 1.10.3.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Path clustersetbinding_name required Name of the clustersetbinding that you want to delete. string 1.10.3.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.3.2.4.4. Tags cluster.open-cluster-management.io 1.10.3.3. Definitions 1.10.3.3.1. Clustersetbinding Name Schema apiVersion required string kind required string metadata required object spec required spec spec Name Schema clusterSet required string 1.10.4. Clusterview API (v1alpha1) 1.10.4.1. Overview This documentation is for the clusterview resource for multicluster engine for Kubernetes. The clusterview resource provides a CLI command that enables you to view a list of the managed clusters and managed cluster sets that that you can access. The three possible requests are: list, get, and watch. 1.10.4.1.1. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.10.4.1.2. Tags clusterview.open-cluster-management.io : View a list of managed clusters that your ID can access. 1.10.4.2. Paths 1.10.4.2.1. Get managed clusters 1.10.4.2.1.1. Description View a list of the managed clusters that you can access. 1.10.4.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.10.4.2.1.3. 
Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.4.2.1.4. Consumes managedcluster/yaml 1.10.4.2.1.5. Tags clusterview.open-cluster-management.io 1.10.4.2.2. List managed clusters 1.10.4.2.2.1. Description View a list of the managed clusters that you can access. 1.10.4.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body optional Name of the user ID for which you want to list the managed clusters. string 1.10.4.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.4.2.2.4. Consumes managedcluster/yaml 1.10.4.2.2.5. Tags clusterview.open-cluster-management.io 1.10.4.2.2.6. Example HTTP request 1.10.4.2.2.6.1. Request body { "apiVersion" : "clusterview.open-cluster-management.io/v1alpha1", "kind" : "ClusterView", "metadata" : { "name" : "<user_ID>" }, "spec": { }, "status" : { } } 1.10.4.2.3. Watch the managed cluster sets 1.10.4.2.3.1. Description Watch the managed clusters that you can access. 1.10.4.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clusterview_name optional Name of the user ID that you want to watch. string 1.10.4.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.4.2.4. List the managed cluster sets. 1.10.4.2.4.1. Description List the managed clusters that you can access. 1.10.4.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clusterview_name optional Name of the user ID that you want to watch. string 1.10.4.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.4.2.5. List the managed cluster sets. 1.10.4.2.5.1. Description List the managed clusters that you can access. 1.10.4.2.5.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clusterview_name optional Name of the user ID that you want to watch. string 1.10.4.2.5.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.4.2.6. Watch the managed cluster sets. 1.10.4.2.6.1. Description Watch the managed clusters that you can access. 1.10.4.2.6.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clusterview_name optional Name of the user ID that you want to watch. string 1.10.4.2.6.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.5. 
ManagedServiceAccount API (v1alpha1) (Deprecated) 1.10.5.1. Overview This documentation is for the ManagedServiceAccount resource for the multicluster engine operator. The ManagedServiceAccount resource has four possible requests: create, query, delete, and update. Deprecated: The v1alpha1 API is deprecated. For best results, use v1beta1 instead. 1.10.5.1.1. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.10.5.1.2. Tags managedserviceaccounts.authentication.open-cluster-management.io` : Create and manage ManagedServiceAccounts 1.10.5.2. Paths 1.10.5.2.1. Create a ManagedServiceAccount 1.10.5.2.1.1. Description Create a ManagedServiceAccount . 1.10.5.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the ManagedServiceAccount to be created. ManagedServiceAccount 1.10.5.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.5.2.1.4. Consumes managedserviceaccount/yaml 1.10.5.2.1.5. Tags managedserviceaccounts.authentication.open-cluster-management.io 1.10.5.2.1.5.1. Request body apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: controller-gen.kubebuilder.io/version: v0.14.0 name: managedserviceaccounts.authentication.open-cluster-management.io spec: group: authentication.open-cluster-management.io names: kind: ManagedServiceAccount listKind: ManagedServiceAccountList plural: managedserviceaccounts singular: managedserviceaccount scope: Namespaced versions: - deprecated: true deprecationWarning: authentication.open-cluster-management.io/v1alpha1 ManagedServiceAccount is deprecated; use authentication.open-cluster-management.io/v1beta1 ManagedServiceAccount; version v1alpha1 will be removed in the release name: v1alpha1 schema: openAPIV3Schema: description: ManagedServiceAccount is the Schema for the managedserviceaccounts API properties: apiVersion: description: |- APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources type: string kind: description: |- Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds type: string metadata: type: object spec: description: ManagedServiceAccountSpec defines the desired state of ManagedServiceAccount properties: rotation: description: Rotation is the policy for rotation the credentials. properties: enabled: default: true description: |- Enabled prescribes whether the ServiceAccount token will be rotated from the upstream type: boolean validity: default: 8640h0m0s description: Validity is the duration for which the signed ServiceAccount token is valid. type: string type: object ttlSecondsAfterCreation: description: |- ttlSecondsAfterCreation limits the lifetime of a ManagedServiceAccount. If the ttlSecondsAfterCreation field is set, the ManagedServiceAccount will be automatically deleted regardless of the ManagedServiceAccount's status. 
When the ManagedServiceAccount is deleted, its lifecycle guarantees (e.g. finalizers) will be honored. If this field is unset, the ManagedServiceAccount won't be automatically deleted. If this field is set to zero, the ManagedServiceAccount becomes eligible for deletion immediately after its creation. In order to use ttlSecondsAfterCreation, the EphemeralIdentity feature gate must be enabled. exclusiveMinimum: true format: int32 minimum: 0 type: integer required: - rotation type: object status: description: ManagedServiceAccountStatus defines the observed state of ManagedServiceAccount properties: conditions: description: Conditions is the condition list. items: description: "Condition contains details for one aspect of the current state of this API Resource.\n---\nThis struct is intended for direct use as an array at the field path .status.conditions. For example,\n\n\n\ttype FooStatus struct{\n\t // Represents the observations of a foo's current state.\n\t // Known .status.conditions.type are: \"Available\", \"Progressing\", and \"Degraded\"\n\t // +patchMergeKey=type\n\t // +patchStrategy=merge\n\t // +listType=map\n\t \ // +listMapKey=type\n\t Conditions []metav1.Condition `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"`\n\n\n\t \ // other fields\n\t}" properties: lastTransitionTime: description: |- lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. format: date-time type: string message: description: |- message is a human readable message indicating details about the transition. This may be an empty string. maxLength: 32768 type: string observedGeneration: description: |- observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. format: int64 minimum: 0 type: integer reason: description: |- reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. maxLength: 1024 minLength: 1 pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?USD type: string status: description: status of the condition, one of True, False, Unknown. enum: - "True" - "False" - Unknown type: string type: description: |- type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) maxLength: 316 pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])USD type: string required: - lastTransitionTime - message - reason - status - type type: object type: array expirationTimestamp: description: ExpirationTimestamp is the time when the token will expire. 
format: date-time type: string tokenSecretRef: description: |- TokenSecretRef is a reference to the corresponding ServiceAccount's Secret, which stores the CA certficate and token from the managed cluster. properties: lastRefreshTimestamp: description: |- LastRefreshTimestamp is the timestamp indicating when the token in the Secret is refreshed. format: date-time type: string name: description: Name is the name of the referenced secret. type: string required: - lastRefreshTimestamp - name type: object type: object type: object served: true storage: false subresources: status: {} - name: v1beta1 schema: openAPIV3Schema: description: ManagedServiceAccount is the Schema for the managedserviceaccounts API properties: apiVersion: description: |- APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources type: string kind: description: |- Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds type: string metadata: type: object spec: description: ManagedServiceAccountSpec defines the desired state of ManagedServiceAccount properties: rotation: description: Rotation is the policy for rotation the credentials. properties: enabled: default: true description: |- Enabled prescribes whether the ServiceAccount token will be rotated before it expires. Deprecated: All ServiceAccount tokens will be rotated before they expire regardless of this field. type: boolean validity: default: 8640h0m0s description: Validity is the duration of validity for requesting the signed ServiceAccount token. type: string type: object ttlSecondsAfterCreation: description: |- ttlSecondsAfterCreation limits the lifetime of a ManagedServiceAccount. If the ttlSecondsAfterCreation field is set, the ManagedServiceAccount will be automatically deleted regardless of the ManagedServiceAccount's status. When the ManagedServiceAccount is deleted, its lifecycle guarantees (e.g. finalizers) will be honored. If this field is unset, the ManagedServiceAccount won't be automatically deleted. If this field is set to zero, the ManagedServiceAccount becomes eligible for deletion immediately after its creation. In order to use ttlSecondsAfterCreation, the EphemeralIdentity feature gate must be enabled. exclusiveMinimum: true format: int32 minimum: 0 type: integer required: - rotation type: object status: description: ManagedServiceAccountStatus defines the observed state of ManagedServiceAccount properties: conditions: description: Conditions is the condition list. items: description: "Condition contains details for one aspect of the current state of this API Resource.\n---\nThis struct is intended for direct use as an array at the field path .status.conditions. 
For example,\n\n\n\ttype FooStatus struct{\n\t // Represents the observations of a foo's current state.\n\t // Known .status.conditions.type are: \"Available\", \"Progressing\", and \"Degraded\"\n\t // +patchMergeKey=type\n\t // +patchStrategy=merge\n\t // +listType=map\n\t \ // +listMapKey=type\n\t Conditions []metav1.Condition `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"`\n\n\n\t \ // other fields\n\t}" properties: lastTransitionTime: description: |- lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. format: date-time type: string message: description: |- message is a human readable message indicating details about the transition. This may be an empty string. maxLength: 32768 type: string observedGeneration: description: |- observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. format: int64 minimum: 0 type: integer reason: description: |- reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. maxLength: 1024 minLength: 1 pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?USD type: string status: description: status of the condition, one of True, False, Unknown. enum: - "True" - "False" - Unknown type: string type: description: |- type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) maxLength: 316 pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])USD type: string required: - lastTransitionTime - message - reason - status - type type: object type: array expirationTimestamp: description: ExpirationTimestamp is the time when the token will expire. format: date-time type: string tokenSecretRef: description: |- TokenSecretRef is a reference to the corresponding ServiceAccount's Secret, which stores the CA certficate and token from the managed cluster. properties: lastRefreshTimestamp: description: |- LastRefreshTimestamp is the timestamp indicating when the token in the Secret is refreshed. format: date-time type: string name: description: Name is the name of the referenced secret. type: string required: - lastRefreshTimestamp - name type: object type: object type: object served: true storage: true subresources: status: {} 1.10.5.2.2. Query a single ManagedServiceAccount 1.10.5.2.2.1. Description Query a single ManagedServiceAccount for more details. 1.10.5.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path managedserviceaccount_name required Name of the ManagedServiceAccount that you want to query. string 1.10.5.2.2.3. 
Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.5.2.2.4. Tags managedserviceaccounts.authentication.open-cluster-management.io 1.10.5.2.3. Delete a ManagedServiceAccount 1.10.5.2.3.1. Description Delete a single ManagedServiceAccount . 1.10.5.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path managedserviceaccount_name required Name of the ManagedServiceAccount that you want to delete. string 1.10.5.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.5.2.3.4. Tags managedserviceaccounts.authentication.open-cluster-management.io 1.10.5.3. Definitions 1.10.5.3.1. ManagedServiceAccount Name Description Schema apiVersion required The versioned schema of the ManagedServiceAccount . string kind required String value that represents the REST resource. string metadata required The meta data of the ManagedServiceAccount . object spec required The specification of the ManagedServiceAccount . 1.10.6. MultiClusterEngine API (v1alpha1) 1.10.6.1. Overview This documentation is for the MultiClusterEngine resource for multicluster engine for Kubernetes. The MultiClusterEngine resource has four possible requests: create, query, delete, and update. 1.10.6.1.1. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.10.6.1.2. Tags multiclusterengines.multicluster.openshift.io : Create and manage MultiClusterEngines 1.10.6.2. Paths 1.10.6.2.1. Create a MultiClusterEngine 1.10.6.2.1.1. Description Create a MultiClusterEngine. 1.10.6.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the MultiClusterEngine to be created. MultiClusterEngine 1.10.6.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.6.2.1.4. Consumes MultiClusterEngines/yaml 1.10.6.2.1.5. Tags multiclusterengines.multicluster.openshift.io 1.10.6.2.1.5.1. Request body { "apiVersion": "apiextensions.k8s.io/v1", "kind": "CustomResourceDefinition", "metadata": { "annotations": { "controller-gen.kubebuilder.io/version": "v0.4.1" }, "creationTimestamp": null, "name": "multiclusterengines.multicluster.openshift.io" }, "spec": { "group": "multicluster.openshift.io", "names": { "kind": "MultiClusterEngine", "listKind": "MultiClusterEngineList", "plural": "multiclusterengines", "shortNames": [ "mce" ], "singular": "multiclusterengine" }, "scope": "Cluster", "versions": [ { "additionalPrinterColumns": [ { "description": "The overall state of the MultiClusterEngine", "jsonPath": ".status.phase", "name": "Status", "type": "string" }, { "jsonPath": ".metadata.creationTimestamp", "name": "Age", "type": "date" } ], "name": "v1alpha1", "schema": { "openAPIV3Schema": { "description": "MultiClusterEngine is the Schema for the multiclusterengines\nAPI", "properties": { "apiVersion": { "description": "APIVersion defines the versioned schema of this representation\nof an object. 
Servers should convert recognized schemas to the latest\ninternal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", "type": "string" }, "kind": { "description": "Kind is a string value representing the REST resource this\nobject represents. Servers may infer this from the endpoint the client\nsubmits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", "type": "string" }, "metadata": { "type": "object" }, "spec": { "description": "MultiClusterEngineSpec defines the desired state of MultiClusterEngine", "properties": { "imagePullSecret": { "description": "Override pull secret for accessing MultiClusterEngine\noperand and endpoint images", "type": "string" }, "nodeSelector": { "additionalProperties": { "type": "string" }, "description": "Set the nodeselectors", "type": "object" }, "targetNamespace": { "description": "Location where MCE resources will be placed", "type": "string" }, "tolerations": { "description": "Tolerations causes all components to tolerate any taints.", "items": { "description": "The pod this Toleration is attached to tolerates any\ntaint that matches the triple <key,value,effect> using the matching\noperator <operator>.", "properties": { "effect": { "description": "Effect indicates the taint effect to match. Empty\nmeans match all taint effects. When specified, allowed values\nare NoSchedule, PreferNoSchedule and NoExecute.", "type": "string" }, "key": { "description": "Key is the taint key that the toleration applies\nto. Empty means match all taint keys. If the key is empty,\noperator must be Exists; this combination means to match all\nvalues and all keys.", "type": "string" }, "operator": { "description": "Operator represents a key's relationship to the\nvalue. Valid operators are Exists and Equal. Defaults to Equal.\nExists is equivalent to wildcard for value, so that a pod\ncan tolerate all taints of a particular category.", "type": "string" }, "tolerationSeconds": { "description": "TolerationSeconds represents the period of time\nthe toleration (which must be of effect NoExecute, otherwise\nthis field is ignored) tolerates the taint. By default, it\nis not set, which means tolerate the taint forever (do not\nevict). Zero and negative values will be treated as 0 (evict\nimmediately) by the system.", "format": "int64", "type": "integer" }, "value": { "description": "Value is the taint value the toleration matches\nto. 
If the operator is Exists, the value should be empty,\notherwise just a regular string.", "type": "string" } }, "type": "object" }, "type": "array" } }, "type": "object" }, "status": { "description": "MultiClusterEngineStatus defines the observed state of MultiClusterEngine", "properties": { "components": { "items": { "description": "ComponentCondition contains condition information for\ntracked components", "properties": { "kind": { "description": "The resource kind this condition represents", "type": "string" }, "lastTransitionTime": { "description": "LastTransitionTime is the last time the condition\nchanged from one status to another.", "format": "date-time", "type": "string" }, "message": { "description": "Message is a human-readable message indicating\ndetails about the last status change.", "type": "string" }, "name": { "description": "The component name", "type": "string" }, "reason": { "description": "Reason is a (brief) reason for the condition's\nlast status change.", "type": "string" }, "status": { "description": "Status is the status of the condition. One of True,\nFalse, Unknown.", "type": "string" }, "type": { "description": "Type is the type of the cluster condition.", "type": "string" } }, "type": "object" }, "type": "array" }, "conditions": { "items": { "properties": { "lastTransitionTime": { "description": "LastTransitionTime is the last time the condition\nchanged from one status to another.", "format": "date-time", "type": "string" }, "lastUpdateTime": { "description": "The last time this condition was updated.", "format": "date-time", "type": "string" }, "message": { "description": "Message is a human-readable message indicating\ndetails about the last status change.", "type": "string" }, "reason": { "description": "Reason is a (brief) reason for the condition's\nlast status change.", "type": "string" }, "status": { "description": "Status is the status of the condition. One of True,\nFalse, Unknown.", "type": "string" }, "type": { "description": "Type is the type of the cluster condition.", "type": "string" } }, "type": "object" }, "type": "array" }, "phase": { "description": "Latest observed overall state", "type": "string" } }, "type": "object" } }, "type": "object" } }, "served": true, "storage": true, "subresources": { "status": {} } } ] }, "status": { "acceptedNames": { "kind": "", "plural": "" }, "conditions": [], "storedVersions": [] } } 1.10.6.2.2. Query all MultiClusterEngines 1.10.6.2.2.1. Description Query your multicluster engine for more details. 1.10.6.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.10.6.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.6.2.2.4. Consumes operator/yaml 1.10.6.2.2.5. Tags multiclusterengines.multicluster.openshift.io 1.10.6.2.3. Delete a MultiClusterEngine operator 1.10.6.2.3.1. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path name required Name of the multiclusterengine that you want to delete. string 1.10.6.2.3.2. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.6.2.3.3. 
Tags multiclusterengines.multicluster.openshift.io 1.10.6.3. Definitions 1.10.6.3.1. MultiClusterEngine Name Description Schema apiVersion required The versioned schema of the MultiClusterEngines. string kind required String value that represents the REST resource. string metadata required Describes rules that define the resource. object spec required MultiClusterEngineSpec defines the desired state of MultiClusterEngine. See List of specs 1.10.6.3.2. List of specs Name Description Schema nodeSelector optional Set the nodeselectors. map[string]string imagePullSecret optional Override pull secret for accessing MultiClusterEngine operand and endpoint images. string tolerations optional Tolerations causes all components to tolerate any taints. []corev1.Toleration targetNamespace optional Location where MCE resources will be placed. string 1.10.7. Placements API (v1beta1) 1.10.7.1. Overview This documentation is for the Placement resource for multicluster engine for Kubernetes. Placement resource has four possible requests: create, query, delete and update. 1.10.7.1.1. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.10.7.1.2. Tags cluster.open-cluster-management.io : Create and manage Placements 1.10.7.2. Paths 1.10.7.2.1. Query all Placements 1.10.7.2.1.1. Description Query your Placements for more details. 1.10.7.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.10.7.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.7.2.1.4. Consumes placement/yaml 1.10.7.2.1.5. Tags cluster.open-cluster-management.io 1.10.7.2.2. Create a Placement 1.10.7.2.2.1. Description Create a Placement. 1.10.7.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the placement to be created. Placement 1.10.7.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.7.2.2.4. Consumes placement/yaml 1.10.7.2.2.5. Tags cluster.open-cluster-management.io 1.10.7.2.2.6. Example HTTP request 1.10.7.2.2.6.1. Request body { "apiVersion" : "cluster.open-cluster-management.io/v1beta1", "kind" : "Placement", "metadata" : { "name" : "placement1", "namespace": "ns1" }, "spec": { "predicates": [ { "requiredClusterSelector": { "labelSelector": { "matchLabels": { "vendor": "OpenShift" } } } } ] }, "status" : { } } 1.10.7.2.3. Query a single Placement 1.10.7.2.3.1. Description Query a single Placement for more details. 1.10.7.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path placement_name required Name of the Placement that you want to query. string 1.10.7.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.7.2.3.4. Tags cluster.open-cluster-management.io 1.10.7.2.4. Delete a Placement 1.10.7.2.4.1. Description Delete a single Placement. 1.10.7.2.4.2. 
Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path placement_name required Name of the Placement that you want to delete. string 1.10.7.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.7.2.4.4. Tags cluster.open-cluster-management.io 1.10.7.3. Definitions 1.10.7.3.1. Placement Name Description Schema apiVersion required The versioned schema of the Placement. string kind required String value that represents the REST resource. string metadata required The meta data of the Placement. object spec required The specification of the Placement. spec spec Name Description Schema ClusterSets optional A subset of ManagedClusterSets from which the ManagedClusters are selected. If it is empty, ManagedClusters is selected from the ManagedClusterSets that are bound to the Placement namespace. Otherwise, ManagedClusters are selected from the intersection of this subset and the ManagedClusterSets are bound to the placement namespace. string array numberOfClusters optional The desired number of ManagedClusters to be selected. integer (int32) predicates optional A subset of cluster predicates to select ManagedClusters. The conditional logic is OR . clusterPredicate array clusterPredicate Name Description Schema requiredClusterSelector optional A cluster selector to select ManagedClusters with a label and cluster claim. clusterSelector clusterSelector Name Description Schema labelSelector optional A selector of ManagedClusters by label. object claimSelector optional A selector of ManagedClusters by claim. clusterClaimSelector clusterClaimSelector Name Description Schema matchExpressions optional A subset of the cluster claim selector requirements. The conditional logic is AND . < object > array 1.10.8. PlacementDecisions API (v1beta1) 1.10.8.1. Overview This documentation is for the PlacementDecision resource for multicluster engine for Kubernetes. PlacementDecision resource has four possible requests: create, query, delete and update. 1.10.8.1.1. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.10.8.1.2. Tags cluster.open-cluster-management.io : Create and manage PlacementDecisions. 1.10.8.2. Paths 1.10.8.2.1. Query all PlacementDecisions 1.10.8.2.1.1. Description Query your PlacementDecisions for more details. 1.10.8.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.10.8.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.8.2.1.4. Consumes placementdecision/yaml 1.10.8.2.1.5. Tags cluster.open-cluster-management.io 1.10.8.2.2. Create a PlacementDecision 1.10.8.2.2.1. Description Create a PlacementDecision. 1.10.8.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the PlacementDecision to be created. PlacementDecision 1.10.8.2.2.3. 
Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.8.2.2.4. Consumes placementdecision/yaml 1.10.8.2.2.5. Tags cluster.open-cluster-management.io 1.10.8.2.2.6. Example HTTP request 1.10.8.2.2.6.1. Request body { "apiVersion" : "cluster.open-cluster-management.io/v1beta1", "kind" : "PlacementDecision", "metadata" : { "labels" : { "cluster.open-cluster-management.io/placement" : "placement1" }, "name" : "placement1-decision1", "namespace": "ns1" }, "status" : { } } 1.10.8.2.3. Query a single PlacementDecision 1.10.8.2.3.1. Description Query a single PlacementDecision for more details. 1.10.8.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path placementdecision_name required Name of the PlacementDecision that you want to query. string 1.10.8.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.8.2.3.4. Tags cluster.open-cluster-management.io 1.10.8.2.4. Delete a PlacementDecision 1.10.8.2.4.1. Description Delete a single PlacementDecision. 1.10.8.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path placementdecision_name required Name of the PlacementDecision that you want to delete. string 1.10.8.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.8.2.4.4. Tags cluster.open-cluster-management.io 1.10.8.3. Definitions 1.10.8.3.1. PlacementDecision Name Description Schema apiVersion required The versioned schema of PlacementDecision. string kind required String value that represents the REST resource. string metadata required The meta data of PlacementDecision. object 1.10.9. KlusterletConfig API (v1alpha1) 1.10.9.1. Overview This documentation is for the KlusterletConfig resource for the multicluster engine for Kubernetes operator. The KlusterletConfig resource is used to configure the Klusterlet installation. The four possible requests are: create, query, delete, and update. 1.10.9.1.1. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.10.9.1.2. Tags klusterletconfigs.config.open-cluster-management.io : Create and manage klusterletconfigs 1.10.9.2. Paths 1.10.9.2.1. Query all KlusterletConfig 1.10.9.2.1.1. Description Query all KlusterletConfig for more details. 1.10.9.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.10.9.2.1.3. Responses HTTP Code Description Schema 200 Success KlusterletConfig yaml 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.9.2.1.4. Consumes klusterletconfig/yaml 1.10.9.2.1.5. Tags klusterletconfigs.config.open-cluster-management.io 1.10.9.2.2. Create a KlusterletConfig 1.10.9.2.2.1. Description Create a KlusterletConfig . 1.10.9.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. 
string Body body required Parameters describing the KlusterletConfig you want to create. klusterletconfig 1.10.9.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.9.2.2.4. Consumes klusterletconfig/yaml 1.10.9.2.2.5. Tags klusterletconfigs.config.open-cluster-management.io 1.10.9.2.2.6. Example HTTP request 1.10.9.2.2.6.1. Request body { "apiVersion": "apiextensions.k8s.io/v1", "kind": "CustomResourceDefinition", "metadata": { "annotations": { "controller-gen.kubebuilder.io/version": "v0.7.0" }, "creationTimestamp": null, "name": "klusterletconfigs.config.open-cluster-management.io" }, "spec": { "group": "config.open-cluster-management.io", "names": { "kind": "KlusterletConfig", "listKind": "KlusterletConfigList", "plural": "klusterletconfigs", "singular": "klusterletconfig" }, "preserveUnknownFields": false, "scope": "Cluster", "versions": [ { "name": "v1alpha1", "schema": { "openAPIV3Schema": { "description": "KlusterletConfig contains the configuration of a klusterlet including the upgrade strategy, config overrides, proxy configurations etc.", "properties": { "apiVersion": { "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", "type": "string" }, "kind": { "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", "type": "string" }, "metadata": { "type": "object" }, "spec": { "description": "Spec defines the desired state of KlusterletConfig", "properties": { "appliedManifestWorkEvictionGracePeriod": { "description": "AppliedManifestWorkEvictionGracePeriod is the eviction grace period the work agent will wait before evicting the AppliedManifestWorks, whose corresponding ManifestWorks are missing on the hub cluster, from the managed cluster. If not present, the default value of the work agent will be used. If its value is set to \"INFINITE\", it means the AppliedManifestWorks will never been evicted from the managed cluster.", "pattern": "^([0-9]+(s|m|h))+USD|^INFINITEUSD", "type": "string" }, "bootstrapKubeConfigs": { "description": "BootstrapKubeConfigSecrets is the list of secrets that reflects the Klusterlet.Spec.RegistrationConfiguration.BootstrapKubeConfigs.", "properties": { "localSecretsConfig": { "description": "LocalSecretsConfig include a list of secrets that contains the kubeconfigs for ordered bootstrap kubeconifigs. The secrets must be in the same namespace where the agent controller runs.", "properties": { "hubConnectionTimeoutSeconds": { "default": 600, "description": "HubConnectionTimeoutSeconds is used to set the timeout of connecting to the hub cluster. When agent loses the connection to the hub over the timeout seconds, the agent do a rebootstrap. By default is 10 mins.", "format": "int32", "minimum": 180, "type": "integer" }, "kubeConfigSecrets": { "description": "KubeConfigSecrets is a list of secret names. 
The secrets are in the same namespace where the agent controller runs.", "items": { "properties": { "name": { "description": "Name is the name of the secret.", "type": "string" } }, "type": "object" }, "type": "array" } }, "type": "object" }, "type": { "default": "None", "description": "Type specifies the type of priority bootstrap kubeconfigs. By default, it is set to None, representing no priority bootstrap kubeconfigs are set.", "enum": [ "None", "LocalSecrets" ], "type": "string" } }, "type": "object" }, "hubKubeAPIServerCABundle": { "description": "HubKubeAPIServerCABundle is the CA bundle to verify the server certificate of the hub kube API against. If not present, CA bundle will be determined with the logic below: 1). Use the certificate of the named certificate configured in APIServer/cluster if FQDN matches; 2). Otherwise use the CA certificates from kube-root-ca.crt ConfigMap in the cluster namespace; \n Deprecated and maintained for backward compatibility, use HubKubeAPIServerConfig.ServerVarificationStrategy and HubKubeAPIServerConfig.TrustedCABundles instead", "format": "byte", "type": "string" }, "hubKubeAPIServerConfig": { "description": "HubKubeAPIServerConfig specifies the settings required for connecting to the hub Kube API server. If this field is present, the below deprecated fields will be ignored: - HubKubeAPIServerProxyConfig - HubKubeAPIServerURL - HubKubeAPIServerCABundle", "properties": { "proxyURL": { "description": "ProxyURL is the URL to the proxy to be used for all requests made by client If an HTTPS proxy server is configured, you may also need to add the necessary CA certificates to TrustedCABundles.", "type": "string" }, "serverVerificationStrategy": { "description": "ServerVerificationStrategy is the strategy used for verifying the server certification; The value could be \"UseSystemTruststore\", \"UseAutoDetectedCABundle\", \"UseCustomCABundles\", empty. \n When this strategy is not set or value is empty; if there is only one klusterletConfig configured for a cluster, the strategy is eaual to \"UseAutoDetectedCABundle\", if there are more than one klusterletConfigs, the empty strategy will be overrided by other non-empty strategies.", "enum": [ "UseSystemTruststore", "UseAutoDetectedCABundle", "UseCustomCABundles" ], "type": "string" }, "trustedCABundles": { "description": "TrustedCABundles refers to a collection of user-provided CA bundles used for verifying the server certificate of the hub Kubernetes API If the ServerVerificationStrategy is set to \"UseSystemTruststore\", this field will be ignored. 
Otherwise, the CA certificates from the configured bundles will be appended to the klusterlet CA bundle.", "items": { "description": "CABundle is a user-provided CA bundle", "properties": { "caBundle": { "description": "CABundle refers to a ConfigMap with label \"import.open-cluster-management.io/ca-bundle\" containing the user-provided CA bundle The key of the CA data could be \"ca-bundle.crt\", \"ca.crt\", or \"tls.crt\".", "properties": { "name": { "description": "name is the metadata.name of the referenced config map", "type": "string" }, "namespace": { "description": "name is the metadata.namespace of the referenced config map", "type": "string" } }, "required": [ "name", "namespace" ], "type": "object" }, "name": { "description": "Name is the identifier used to reference the CA bundle; Do not use \"auto-detected\" as the name since it is the reserved name for the auto-detected CA bundle.", "type": "string" } }, "required": [ "caBundle", "name" ], "type": "object" }, "type": "array", "x-kubernetes-list-map-keys": [ "name" ], "x-kubernetes-list-type": "map" }, "url": { "description": "URL is the endpoint of the hub Kube API server. If not present, the .status.apiServerURL of Infrastructure/cluster will be used as the default value. e.g. `oc get infrastructure cluster -o jsonpath='{.status.apiServerURL}'`", "type": "string" } }, "type": "object" }, "hubKubeAPIServerProxyConfig": { "description": "HubKubeAPIServerProxyConfig holds proxy settings for connections between klusterlet/add-on agents on the managed cluster and the kube-apiserver on the hub cluster. Empty means no proxy settings is available. \n Deprecated and maintained for backward compatibility, use HubKubeAPIServerConfig.ProxyURL instead", "properties": { "caBundle": { "description": "CABundle is a CA certificate bundle to verify the proxy server. It will be ignored if only HTTPProxy is set; And it is required when HTTPSProxy is set and self signed CA certificate is used by the proxy server.", "format": "byte", "type": "string" }, "httpProxy": { "description": "HTTPProxy is the URL of the proxy for HTTP requests", "type": "string" }, "httpsProxy": { "description": "HTTPSProxy is the URL of the proxy for HTTPS requests HTTPSProxy will be chosen if both HTTPProxy and HTTPSProxy are set.", "type": "string" } }, "type": "object" }, "hubKubeAPIServerURL": { "description": "HubKubeAPIServerURL is the URL of the hub Kube API server. If not present, the .status.apiServerURL of Infrastructure/cluster will be used as the default value. e.g. `oc get infrastructure cluster -o jsonpath='{.status.apiServerURL}'` \n Deprecated and maintained for backward compatibility, use HubKubeAPIServerConfig.URL instead", "type": "string" }, "installMode": { "description": "InstallMode is the mode to install the klusterlet", "properties": { "noOperator": { "description": "NoOperator is the setting of klusterlet installation when install type is noOperator.", "properties": { "postfix": { "description": "Postfix is the postfix of the klusterlet name. The name of the klusterlet is \"klusterlet\" if it is not set, and \"klusterlet-{Postfix}\". 
The install namespace is \"open-cluster-management-agent\" if it is not set, and \"open-cluster-management-{Postfix}\".", "maxLength": 33, "pattern": "^[-a-z0-9]*[a-z0-9]USD", "type": "string" } }, "type": "object" }, "type": { "default": "default", "description": "InstallModeType is the type of install mode.", "enum": [ "default", "noOperator" ], "type": "string" } }, "type": "object" }, "nodePlacement": { "description": "NodePlacement enables explicit control over the scheduling of the agent components. If the placement is nil, the placement is not specified, it will be omitted. If the placement is an empty object, the placement will match all nodes and tolerate nothing.", "properties": { "nodeSelector": { "additionalProperties": { "type": "string" }, "description": "NodeSelector defines which Nodes the Pods are scheduled on. The default is an empty list.", "type": "object" }, "tolerations": { "description": "Tolerations are attached by pods to tolerate any taint that matches the triple <key,value,effect> using the matching operator <operator>. The default is an empty list.", "items": { "description": "The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>.", "properties": { "effect": { "description": "Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.", "type": "string" }, "key": { "description": "Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.", "type": "string" }, "operator": { "description": "Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.", "type": "string" }, "tolerationSeconds": { "description": "TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.", "format": "int64", "type": "integer" }, "value": { "description": "Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.", "type": "string" } }, "type": "object" }, "type": "array" } }, "type": "object" }, "pullSecret": { "description": "PullSecret is the name of image pull secret.", "properties": { "apiVersion": { "description": "API version of the referent.", "type": "string" }, "fieldPath": { "description": "If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: \"spec.containers{name}\" (where \"name\" refers to the name of the container that triggered the event) or if no container name is specified \"spec.containers[2]\" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. 
TODO: this design is not final and this field is subject to change in the future.", "type": "string" }, "kind": { "description": "Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", "type": "string" }, "name": { "description": "Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names", "type": "string" }, "namespace": { "description": "Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/", "type": "string" }, "resourceVersion": { "description": "Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency", "type": "string" }, "uid": { "description": "UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids", "type": "string" } }, "type": "object", "x-kubernetes-map-type": "atomic" }, "registries": { "description": "Registries includes the mirror and source registries. The source registry will be replaced by the Mirror.", "items": { "properties": { "mirror": { "description": "Mirror is the mirrored registry of the Source. Will be ignored if Mirror is empty.", "type": "string" }, "source": { "description": "Source is the source registry. All image registries will be replaced by Mirror if Source is empty.", "type": "string" } }, "required": [ "mirror" ], "type": "object" }, "type": "array" } }, "type": "object" }, "status": { "description": "Status defines the observed state of KlusterletConfig", "type": "object" } }, "type": "object" } }, "served": true, "storage": true, "subresources": { "status": {} } } ] }, "status": { "acceptedNames": { "kind": "", "plural": "" }, "conditions": [], "storedVersions": [] } } 1.10.9.2.3. Query a single klusterletconfig 1.10.9.2.3.1. Description Query a single KlusterletConfig for more details. 1.10.9.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path klusterletconfig_name required Name of the klusterletconfig that you want to query. string 1.10.9.2.3.3. Responses HTTP Code Description Schema 200 Success KlusterletConfig yaml 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.9.2.3.4. Tags klusterletconfigs.config.open-cluster-management.io 1.10.9.2.4. Delete a klusterletconfig 1.10.9.2.4.1. Description Delete a single KlusterletConfig . 1.10.9.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path klusterletconfig_name required Name of the klusterletconfig that you want to delete. string 1.10.9.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.9.2.4.4. Tags klusterletconfig.authentication.open-cluster-management.io 1.10.9.3. Definitions 1.10.9.3.1. klusterletconfig Name Description Schema apiVersion required The versioned schema of the klusterletconfig. string kind required String value that represents the REST resource. string metadata required The meta data of the KlusterletConfig . 
object spec required The specification of the KlusterletConfig . 1.11. Troubleshooting Before using the Troubleshooting guide, you can run the oc adm must-gather command to gather details, logs, and take steps in debugging issues. For more details, see Running the must-gather command to troubleshoot . Additionally, check your role-based access. See multicluster engine operator Role-based access control for details. 1.11.1. Documented troubleshooting View the list of troubleshooting topics for the multicluster engine operator: Installation: To view the main documentation for the installing tasks, see Installing and upgrading multicluster engine operator . Troubleshooting installation status stuck in installing or pending Troubleshooting reinstallation failure Cluster management: To view the main documentation about managing your clusters, see Cluster lifecycle introduction . Troubleshooting adding day-two nodes to an existing cluster fails with pending user action Troubleshooting an offline cluster Troubleshooting a managed cluster import failure Reimporting cluster fails with unknown authority error Troubleshooting cluster with pending import status Troubleshooting imported clusters offline after certificate change Troubleshooting cluster status changing from offline to available Troubleshooting cluster creation on VMware vSphere Troubleshooting cluster in console with pending or failed status Troubleshooting Klusterlet with degraded conditions Namespace remains after deleting a cluster Auto-import-secret-exists error when importing a cluster Troubleshooting missing PlacementDecision after creating Placement Troubleshooting a discovery failure of bare metal hosts on Dell hardware Troubleshooting Minimal ISO boot failures Troubleshooting managed clusters Unknown on OpenShift Service on AWS with hosted control planes cluster Troubleshooting an attempt to upgrade managed cluster with missing OpenShift Container Platform version 1.11.2. Running the must-gather command to troubleshoot To get started with troubleshooting, learn about the troubleshooting scenarios for users to run the must-gather command to debug the issues, then see the procedures to start using the command. Required access: Cluster administrator 1.11.2.1. Must-gather scenarios Scenario one: Use the Documented troubleshooting section to see if a solution to your problem is documented. The guide is organized by the major functions of the product. With this scenario, you check the guide to see if your solution is in the documentation. Scenario two: If your problem is not documented with steps to resolve, run the must-gather command and use the output to debug the issue. Scenario three: If you cannot debug the issue using your output from the must-gather command, then share your output with Red Hat Support. 1.11.2.2. Must-gather procedure See the following procedure to start using the must-gather command: Learn about the must-gather command and install the prerequisites that you need at Gathering data about your cluster in the OpenShift Container Platform documentation. Log in to your cluster. For the usual use-case, you should run the must-gather while you are logged into your engine cluster. Note: If you want to check your managed clusters, find the gather-managed.log file that is located in the cluster-scoped-resources directory: Check for managed clusters that are not set True for the JOINED and AVAILABLE column. You can run the must-gather command on those clusters that are not connected with True status. 
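As a rough sketch of these two steps, assuming you are logged in to the hub cluster and that the must-gather image tag and output directory shown here are placeholders you adjust for your installed release, the commands might look like the following (the second command corresponds to the gather step that is described next):

  # Check which managed clusters report True in the JOINED and AVAILABLE columns
  oc get managedclusters

  # Gather data with the multicluster engine image; the image tag and directory are assumptions
  oc adm must-gather --image=registry.redhat.io/multicluster-engine/must-gather-rhel8:v2.6 --dest-dir=./mce-must-gather

  # Review the managed cluster summary collected in the output (the exact subdirectory name varies by image)
  cat ./mce-must-gather/*/cluster-scoped-resources/gather-managed.log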
Add the multicluster engine for Kubernetes image that is used for gathering data and the directory. Run the following command: Go to your specified directory to see your output, which is organized in the following levels: Two peer levels: cluster-scoped-resources and namespace resources. Sub-level for each: API group for the custom resource definitions for both cluster-scope and namespace-scoped resources. level for each: YAML file sorted by kind . 1.11.2.3. Must-gather in a disconnected environment Complete the following steps to run the must-gather command in a disconnected environment: In a disconnected environment, mirror the Red Hat operator catalog images into their mirror registry. For more information, see Install on disconnected networks . Run the following command to extract logs, which reference the image from their mirror registry. Replace sha256 with the current image: You can open a Jira bug for the product team here . Running the must-gather command to troubleshoot 1.11.3. Troubleshooting: Adding day-two nodes to an existing cluster fails with pending user action Adding a node, or scaling out, to your existing cluster that is created by the multicluster engine for Kubernetes operator with Zero Touch Provisioning or Host inventory create methods fails during installation. The installation process works correctly during the Discovery phase, but fails on the installation phase. The configuration of the network is failing. From the hub cluster in the integrated console, you see a Pending user action. In the description, you can see it failing on the rebooting step. The error message about failing is not very accurate, since the agent that is running in the installing host cannot report information. 1.11.3.1. Symptom: Installation for day two workers fails After the Discover phase, the host reboots to continue the installation, but it cannot configure the network. Check for the following symptoms and messages: From the hub cluster in the integrated console, check for Pending user action on the adding node, with the Rebooting indicator: From the Red Hat OpenShift Container Platform configuration managed cluster, check the MachineConfigs of the existing cluster. Check if any of the MachineConfigs create any file on the following directories: /sysroot/etc/NetworkManager/system-connections/ /sysroot/etc/sysconfig/network-scripts/ From the terminal of the installing host, check the failing host for the following messages. You can use journalctl to see the log messages: If you get the last message in the log, the networking configuration is not propagated because it already found an existing network configuration on the folders previously listed in the Symptom . 1.11.3.2. Resolving the problem: Recreate the node merging network configuration Perform the following task to use a proper network configuration during the installation: Delete the node from your hub cluster. Repeat your process to install the node in the same way. Create the BareMetalHost object of the node with the following annotation: "bmac.agent-install.openshift.io/installer-args": "[\"--append-karg\", \"coreos.force_persist_ip\"]" The node starts the installation. After the Discovery phase, the node merges the network configuration between the changes on the existing cluster and the initial configuration. 1.11.4. Troubleshooting deletion failure of a hosted control plane cluster on the Agent platform When you destroy a hosted control plane cluster on the Agent platform, all the back-end resources are normally deleted. 
If the machine resources are not deleted properly, a cluster deletion fails. In that case, you must manually remove the remaining machine resources. 1.11.4.1. Symptom: An error occurs when destroying a hosted control plane cluster After you attempt to destroy the hosted control plane cluster on the Agent platform, the hcp destroy command fails with the following error: 1.11.4.2. Resolving the problem: Remove the remaining machine resources manually Complete the following steps to destroy a hosted control plane cluster successfully on the Agent platform: Run the following command to see the list of remaining machine resources by replacing <hosted_cluster_namespace> with the name of the hosted cluster namespace: See the following example output: Run the following command to remove the machine.cluster.x-k8s.io finalizer attached to machine resources: Run the following command to verify that you receive the No resources found message on your terminal: Run the following command to destroy a hosted control plane cluster on the Agent platform: Replace <cluster_name> with the name of your cluster. 1.11.5. Troubleshooting installation status stuck in installing or pending When installing the multicluster engine operator, the MultiClusterEngine remains in the Installing phase, or multiple pods maintain a Pending status. 1.11.5.1. Symptom: Stuck in Pending status More than ten minutes have passed since you installed MultiClusterEngine and one or more components from the status.components field of the MultiClusterEngine resource report ProgressDeadlineExceeded . Resource constraints on the cluster might be the issue. Check the pods in the namespace where MultiClusterEngine was installed. You might see Pending with a status similar to the following: In this case, the worker node resources are not sufficient in the cluster to run the product. 1.11.5.2. Resolving the problem: Adjust worker node sizing If you have this problem, then your cluster needs to be updated with either larger or more worker nodes. See Sizing your cluster for guidelines on sizing your cluster. 1.11.6. Troubleshooting reinstallation failure When reinstalling multicluster engine operator, the pods do not start. 1.11.6.1. Symptom: Reinstallation failure If your pods do not start after you install the multicluster engine operator, it is often because items from a previous installation of multicluster engine operator were not removed correctly when it was uninstalled. In this case, the pods do not start after completing the installation process. 1.11.6.2. Resolving the problem: Reinstallation failure If you have this problem, complete the following steps: Run the uninstallation process to remove the current components by following the steps in Uninstalling . Install the Helm CLI binary version 3.2.0, or later, by following the instructions at Installing Helm . Ensure that your Red Hat OpenShift Container Platform CLI is configured to run oc commands. See Getting started with the OpenShift CLI in the OpenShift Container Platform documentation for more information about how to configure the oc commands. Copy the following script into a file: Replace <namespace> in the script with the name of the namespace where multicluster engine operator was installed. Ensure that you specify the correct namespace, as the namespace is cleaned out and deleted. Run the script to remove the artifacts from the installation. Run the installation. See Installing while connected online . 1.11.7.
Troubleshooting an offline cluster There are a few common causes for a cluster showing an offline status. 1.11.7.1. Symptom: Cluster status is offline After you complete the procedure for creating a cluster, you cannot access it from the Red Hat Advanced Cluster Management console, and it shows a status of offline . 1.11.7.2. Resolving the problem: Cluster status is offline Determine if the managed cluster is available. You can check this in the Clusters area of the Red Hat Advanced Cluster Management console. If it is not available, try restarting the managed cluster. If the managed cluster status is still offline, complete the following steps: Run the oc get managedcluster <cluster_name> -o yaml command on the hub cluster. Replace <cluster_name> with the name of your cluster. Find the status.conditions section. Check the messages for type: ManagedClusterConditionAvailable and resolve any problems. 1.11.8. Troubleshooting a managed cluster import failure If your cluster import fails, there are a few steps that you can take to determine why the cluster import failed. 1.11.8.1. Symptom: Imported cluster not available After you complete the procedure for importing a cluster, you cannot access it from the console. 1.11.8.2. Resolving the problem: Imported cluster not available There can be a few reasons why an imported cluster is not available after an attempt to import it. If the cluster import fails, complete the following steps, until you find the reason for the failed import: On the hub cluster, run the following command to ensure that the import controller is running. You should see two pods that are running. If either of the pods is not running, run the following command to view the log to determine the reason: On the hub cluster, run the following command to determine if the managed cluster import secret was generated successfully by the import controller: If the import secret does not exist, run the following command to view the log entries for the import controller and determine why it was not created: On the hub cluster, if your managed cluster is local-cluster , provisioned by Hive, or has an auto-import secret, run the following command to check the import status of the managed cluster. If the condition ManagedClusterImportSucceeded is not true , the result of the command indicates the reason for the failure. Check the Klusterlet status of the managed cluster for a degraded condition. See Troubleshooting Klusterlet with degraded conditions to find the reason that the Klusterlet is degraded. 1.11.9. Reimporting cluster fails with unknown authority error If you experience a problem when reimporting a managed cluster to your multicluster engine operator hub cluster, follow the procedure to troubleshoot the problem. 1.11.9.1. Symptom: Reimporting cluster fails with unknown authority error After you provision an OpenShift Container Platform cluster with multicluster engine operator, reimporting the cluster might fail with a x509: certificate signed by unknown authority error when you change or add API server certificates to your OpenShift Container Platform cluster. 1.11.9.2. 
Identifying the problem: Reimporting cluster fails with unknown authority error After failing to reimport your managed cluster, run the following command to get the import controller log on your multicluster engine operator hub cluster: If the following error log appears, your managed cluster API server certificates might have changed: ERROR Reconciler error {"controller": "clusterdeployment-controller", "object": {"name":"awscluster1","namespace":"awscluster1"}, "namespace": "awscluster1", "name": "awscluster1", "reconcileID": "a2cccf24-2547-4e26-95fb-f258a6710d80", "error": "Get \"https://api.awscluster1.dev04.red-chesterfield.com:6443/api?timeout=32s\": x509: certificate signed by unknown authority"} To determine if your managed cluster API server certificates have changed, complete the following steps: Run the following command to specify your managed cluster name by replacing your-managed-cluster-name with the name of your managed cluster: Get your managed cluster kubeconfig secret name by running the following command: Export the kubeconfig to a new file by running the following commands: Get the namespace from your managed cluster with kubeconfig by running the following command: If you receive an error that resembles the following message, your cluster API server certificates have been changed and your kubeconfig file is invalid. Unable to connect to the server: x509: certificate signed by unknown authority 1.11.9.3. Resolving the problem: Reimporting cluster fails with unknown authority error The managed cluster administrator must create a new valid kubeconfig file for your managed cluster. After creating a new kubeconfig , complete the following steps to update the new kubeconfig for your managed cluster: Run the following commands to set your kubeconfig file path and cluster name. Replace <path_to_kubeconfig> with the path to your new kubeconfig file. Replace <managed_cluster_name> with the name of your managed cluster: Run the following command to encode your new kubeconfig : Note: On macOS, run the following command instead: Run the following command to define the kubeconfig JSON patch: Retrieve your administrator kubeconfig secret name from your managed cluster by running the following command: Patch your administrator kubeconfig secret with your new kubeconfig by running the following command: 1.11.10. Troubleshooting cluster with pending import status If you receive Pending import continually on the console of your cluster, follow the procedure to troubleshoot the problem. 1.11.10.1. Symptom: Cluster with pending import status After importing a cluster by using the Red Hat Advanced Cluster Management console, the cluster appears in the console with a status of Pending import . 1.11.10.2. Identifying the problem: Cluster with pending import status Run the following command on the managed cluster to view the Kubernetes pod names that are having the issue: Run the following command on the managed cluster to find the log entry for the error: Replace registration_agent_pod with the pod name that you identified in step 1. Search the returned results for text that indicates there was a networking connectivity problem. Examples include: no such host . 1.11.10.3. Resolving the problem: Cluster with pending import status Retrieve the port number that is having the problem by entering the following command on the hub cluster: Ensure that the hostname from the managed cluster can be resolved, and that outbound connectivity to the host and port is occurring.
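A minimal sketch of this connectivity check, assuming the endpoint that the registration agent cannot reach is the hub API server and that nslookup and curl are available on the managed cluster (the hostnames are placeholders taken from the URL returned by the first command):

  # On the hub cluster: find the API server URL (host and port) that the managed cluster must reach
  oc get infrastructure cluster -o jsonpath='{.status.apiServerURL}'

  # On the managed cluster (or a host on its network): confirm DNS resolution and outbound connectivity
  nslookup <hub_api_hostname>
  curl -kv https://<hub_api_hostname>:6443/healthz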
If the communication cannot be established by the managed cluster, the cluster import is not complete. The cluster status for the managed cluster is Pending import . 1.11.11. Troubleshooting imported clusters offline after certificate change Installing a custom apiserver certificate is supported, but one or more clusters that were imported before you changed the certificate information can have an offline status. 1.11.11.1. Symptom: Clusters offline after certificate change After you complete the procedure for updating a certificate secret, one or more of your clusters that were online are now displaying an offline status in the console. 1.11.11.2. Identifying the problem: Clusters offline after certificate change After updating the information for a custom API server certificate, clusters that were imported and running before the new certificate are now in an offline state. The errors that indicate that the certificate is the problem are found in the logs for the pods in the open-cluster-management-agent namespace of the offline managed cluster. The following examples are similar to the errors that are displayed in the logs: See the following work-agent log: See the following registration-agent log: 1.11.11.3. Resolving the problem: Clusters offline after certificate change If your managed cluster is the local-cluster or your managed cluster was created by multicluster engine operator, you must wait 10 minutes or longer to recover your managed cluster. To recover your managed cluster immediately, you can delete your managed cluster import secret on the hub cluster and recover it by using multicluster engine operator. Run the following command: Replace <cluster_name> with the name of the managed cluster that you want to recover. If you want to recover a managed cluster that was imported by using multicluster engine operator, complete the following steps import the managed cluster again: On the hub cluster, recreate the managed cluster import secret by running the following command: Replace <cluster_name> with the name of the managed cluster that you want to import. On the hub cluster, expose the managed cluster import secret to a YAML file by running the following command: Replace <cluster_name> with the name of the managed cluster that you want to import. On the managed cluster, apply the import.yaml file by running the following command: Note: The steps do not detach the managed cluster from the hub cluster. The steps update the required manifests with current settings on the managed cluster, including the new certificate information. 1.11.12. Troubleshooting cluster status changing from offline to available The status of the managed cluster alternates between offline and available without any manual change to the environment or cluster. 1.11.12.1. Symptom: Cluster status changing from offline to available When the network that connects the managed cluster to the hub cluster is unstable, the status of the managed cluster that is reported by the hub cluster cycles between offline and available . 1.11.12.2. Resolving the problem: Cluster status changing from offline to available To attempt to resolve this issue, complete the following steps: Edit your ManagedCluster specification on the hub cluster by entering the following command: Replace cluster-name with the name of your managed cluster. Increase the value of leaseDurationSeconds in your ManagedCluster specification. The default value is 5 minutes, but that might not be enough time to maintain the connection with the network issues. 
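A sketch of this edit, assuming you run it against the hub cluster; the YAML in the comments illustrates only the relevant field:

  # Open the ManagedCluster resource for editing on the hub cluster
  oc edit managedcluster <cluster-name>

  # In the editor, increase the lease duration, for example:
  #   apiVersion: cluster.open-cluster-management.io/v1
  #   kind: ManagedCluster
  #   metadata:
  #     name: <cluster-name>
  #   spec:
  #     hubAcceptsClient: true
  #     leaseDurationSeconds: 1200   # 20 minutes, matching the example in the next step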
Specify a greater amount of time for the lease. For example, you can raise the setting to 20 minutes. 1.11.13. Troubleshooting cluster creation on VMware vSphere If you experience a problem when creating a Red Hat OpenShift Container Platform cluster on VMware vSphere, see the following troubleshooting information to see if one of the topics addresses your problem. Note: Sometimes when the cluster creation process fails on VMware vSphere, the link is not enabled for you to view the logs. If this happens, you can identify the problem by viewing the log of the hive-controllers pod. The hive-controllers log is in the hive namespace. 1.11.13.1. Managed cluster creation fails with certificate IP SAN error 1.11.13.1.1. Symptom: Managed cluster creation fails with certificate IP SAN error After creating a new Red Hat OpenShift Container Platform cluster on VMware vSphere, the cluster fails with an error message that indicates a certificate IP SAN error. 1.11.13.1.2. Identifying the problem: Managed cluster creation fails with certificate IP SAN error The deployment of the managed cluster fails and returns the following errors in the deployment log: 1.11.13.1.3. Resolving the problem: Managed cluster creation fails with certificate IP SAN error Use the VMware vCenter server fully-qualified host name instead of the IP address in the credential. You can also update the VMware vCenter CA certificate to contain the IP SAN. 1.11.13.2. Managed cluster creation fails with unknown certificate authority 1.11.13.2.1. Symptom: Managed cluster creation fails with unknown certificate authority After creating a new Red Hat OpenShift Container Platform cluster on VMware vSphere, the cluster fails because the certificate is signed by an unknown authority. 1.11.13.2.2. Identifying the problem: Managed cluster creation fails with unknown certificate authority The deployment of the managed cluster fails and returns the following errors in the deployment log: 1.11.13.2.3. Resolving the problem: Managed cluster creation fails with unknown certificate authority Ensure that you entered the correct certificate from the certificate authority when creating the credential. 1.11.13.3. Managed cluster creation fails with expired certificate 1.11.13.3.1. Symptom: Managed cluster creation fails with expired certificate After creating a new Red Hat OpenShift Container Platform cluster on VMware vSphere, the cluster fails because the certificate is expired or is not yet valid. 1.11.13.3.2. Identifying the problem: Managed cluster creation fails with expired certificate The deployment of the managed cluster fails and returns the following errors in the deployment log: 1.11.13.3.3. Resolving the problem: Managed cluster creation fails with expired certificate Ensure that the time on your ESXi hosts is synchronized. 1.11.13.4. Managed cluster creation fails with insufficient privilege for tagging 1.11.13.4.1. Symptom: Managed cluster creation fails with insufficient privilege for tagging After creating a new Red Hat OpenShift Container Platform cluster on VMware vSphere, the cluster fails because there is insufficient privilege to use tagging. 1.11.13.4.2. Identifying the problem: Managed cluster creation fails with insufficient privilege for tagging The deployment of the managed cluster fails and returns the following errors in the deployment log: 1.11.13.4.3. Resolving the problem: Managed cluster creation fails with insufficient privilege for tagging Ensure that the required account privileges for your VMware vCenter account are correct.
See Image registry removed during installation for more information. 1.11.13.5. Managed cluster creation fails with invalid dnsVIP 1.11.13.5.1. Symptom: Managed cluster creation fails with invalid dnsVIP After creating a new Red Hat OpenShift Container Platform cluster on VMware vSphere, the cluster fails because there is an invalid dnsVIP. 1.11.13.5.2. Identifying the problem: Managed cluster creation fails with invalid dnsVIP If you see the following message when trying to deploy a new managed cluster with VMware vSphere, it is because you have an older OpenShift Container Platform release image that does not support VMware Installer Provisioned Infrastructure (IPI): 1.11.13.5.3. Resolving the problem: Managed cluster creation fails with invalid dnsVIP Select a release image from a later version of OpenShift Container Platform that supports VMware Installer Provisioned Infrastructure. 1.11.13.6. Managed cluster creation fails with incorrect network type 1.11.13.6.1. Symptom: Managed cluster creation fails with incorrect network type After creating a new Red Hat OpenShift Container Platform cluster on VMware vSphere, the cluster fails because there is an incorrect network type specified. 1.11.13.6.2. Identifying the problem: Managed cluster creation fails with incorrect network type If you see the following message when trying to deploy a new managed cluster with VMware vSphere, it is because you have an older OpenShift Container Platform image that does not support VMware Installer Provisioned Infrastructure (IPI): 1.11.13.6.3. Resolving the problem: Managed cluster creation fails with incorrect network type Select a valid VMware vSphere network type for the specified VMware cluster. 1.11.13.7. Managed cluster creation fails with an error processing disk changes 1.11.13.7.1. Symptom: Adding the VMware vSphere managed cluster fails due to an error processing disk changes After creating a new Red Hat OpenShift Container Platform cluster on VMware vSphere, the cluster fails because there is an error when processing disk changes. 1.11.13.7.2. Identifying the problem: Adding the VMware vSphere managed cluster fails due to an error processing disk changes A message similar to the following is displayed in the logs: 1.11.13.7.3. Resolving the problem: Adding the VMware vSphere managed cluster fails due to an error processing disk changes Use the VMware vSphere client to give the user All privileges for Profile-driven Storage Privileges . 1.11.14. Troubleshooting cluster in console with pending or failed status If you observe Pending status or Failed status in the console for a cluster you created, follow the procedure to troubleshoot the problem. 1.11.14.1. Symptom: Cluster in console with pending or failed status After creating a new cluster by using the console, the cluster does not progress beyond the status of Pending or displays Failed status. 1.11.14.2. Identifying the problem: Cluster in console with pending or failed status If the cluster displays Failed status, navigate to the details page for the cluster and follow the link to the logs provided. If no logs are found or the cluster displays Pending status, continue with the following procedure to check for logs: Procedure 1 Run the following command on the hub cluster to view the names of the Kubernetes pods that were created in the namespace for the new cluster: Replace new_cluster_name with the name of the cluster that you created. If no pod that contains the string provision in the name is listed, continue with Procedure 2.
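As a sketch only (pod names vary by environment), Procedure 1 amounts to listing the pods in the cluster namespace and checking for one whose name contains provision :
oc get pods -n <new_cluster_name>
oc get pods -n <new_cluster_name> | grep provision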
If there is a pod with provision in its name, run the following command on the hub cluster to view the logs of that pod: Replace new_cluster_name_provision_pod_name with the name of the cluster that you created, followed by the pod name that contains provision . Search for errors in the logs that might explain the cause of the problem. Procedure 2 If there is not a pod with provision in its name, the problem occurred earlier in the process. Complete the following procedure to view the logs: Run the following command on the hub cluster: Replace new_cluster_name with the name of the cluster that you created. For more information about cluster installation logs, see Gathering installation logs in the Red Hat OpenShift documentation. See if there is additional information about the problem in the Status.Conditions.Message and Status.Conditions.Reason entries of the resource. 1.11.14.3. Resolving the problem: Cluster in console with pending or failed status After you identify the errors in the logs, determine how to resolve the errors before you destroy the cluster and create it again. The following example provides a possible log error of selecting an unsupported zone, and the actions that are required to resolve it: When you created your cluster, you selected one or more zones within a region that are not supported. Complete one of the following actions when you recreate your cluster to resolve the issue: Select a different zone within the region. Omit the zone that does not provide support, if you have other zones listed. Select a different region for your cluster. After determining the issues from the log, destroy the cluster and recreate it. See Creating clusters for more information about creating a cluster. 1.11.15. Troubleshooting Klusterlet with degraded conditions The Klusterlet degraded conditions can help to diagnose the status of Klusterlet agents on the managed cluster. If a Klusterlet is in the degraded condition, the Klusterlet agents on the managed cluster might have errors that you need to troubleshoot. See the following information for Klusterlet degraded conditions that are set to True . 1.11.15.1. Symptom: Klusterlet is in the degraded condition After deploying a Klusterlet on the managed cluster, the KlusterletRegistrationDegraded or KlusterletWorkDegraded condition displays a status of True . 1.11.15.2. Identifying the problem: Klusterlet is in the degraded condition Run the following command on the managed cluster to view the Klusterlet status: Check KlusterletRegistrationDegraded or KlusterletWorkDegraded to see if the condition is set to True . Proceed to Resolving the problem for any degraded conditions that are listed. 1.11.15.3. Resolving the problem: Klusterlet is in the degraded condition See the following list of degraded statuses and how you can attempt to resolve those issues: If the KlusterletRegistrationDegraded condition has a status of True and the condition reason is BootStrapSecretMissing , you need to create a bootstrap secret in the open-cluster-management-agent namespace. If the KlusterletRegistrationDegraded condition displays True and the condition reason is BootstrapSecretError or BootstrapSecretUnauthorized , then the current bootstrap secret is invalid. Delete the current bootstrap secret and recreate a valid bootstrap secret in the open-cluster-management-agent namespace. If the KlusterletRegistrationDegraded and KlusterletWorkDegraded conditions display True and the condition reason is HubKubeConfigSecretMissing , delete the Klusterlet and recreate it.
If the KlusterletRegistrationDegraded and KlusterletWorkDegraded conditions display True and the condition reason is ClusterNameMissing , KubeConfigMissing , HubConfigSecretError , or HubConfigSecretUnauthorized , delete the hub cluster kubeconfig secret from the open-cluster-management-agent namespace. The registration agent will bootstrap again to get a new hub cluster kubeconfig secret. If the KlusterletRegistrationDegraded condition displays True and the condition reason is GetRegistrationDeploymentFailed or UnavailableRegistrationPod , you can check the condition message to get the problem details and attempt to resolve the issue. If the KlusterletWorkDegraded condition displays True and the condition reason is GetWorkDeploymentFailed or UnavailableWorkPod , you can check the condition message to get the problem details and attempt to resolve the issue. 1.11.16. Namespace remains after deleting a cluster When you remove a managed cluster, the namespace is normally removed as part of the cluster removal process. In rare cases, the namespace remains with some artifacts in it. In that case, you must manually remove the namespace. 1.11.16.1. Symptom: Namespace remains after deleting a cluster After removing a managed cluster, the namespace is not removed. 1.11.16.2. Resolving the problem: Namespace remains after deleting a cluster Complete the following steps to remove the namespace manually: Run the following command to produce a list of the resources that remain in the <cluster_name> namespace: Replace cluster_name with the name of the namespace for the cluster that you attempted to remove. Delete each identified resource on the list that does not have a status of Delete by entering the following command to edit the list: Replace resource_kind with the kind of the resource. Replace resource_name with the name of the resource. Replace namespace with the name of the namespace of the resource. Locate the finalizer attribute in the metadata. Delete the non-Kubernetes finalizers by using the vi editor dd command. Save the list and exit the vi editor by entering the :wq command. Delete the namespace by entering the following command: Replace cluster-name with the name of the namespace that you are trying to delete. 1.11.17. Auto-import-secret-exists error when importing a cluster Your cluster import fails with an error message that reads: auto import secret exists. 1.11.17.1. Symptom: Auto import secret exists error when importing a cluster When importing a hive cluster for management, an auto-import-secret already exists error is displayed. 1.11.17.2. Resolving the problem: Auto-import-secret-exists error when importing a cluster This problem occurs when you attempt to import a cluster that was previously managed. When this happens, the secrets conflict when you try to reimport the cluster. To work around this problem, complete the following steps: To manually delete the existing auto-import-secret , run the following command on the hub cluster: Replace cluster-namespace with the namespace of your cluster. Import your cluster again by using the procedure in Cluster import introduction . 1.11.18. Troubleshooting missing PlacementDecision after creating Placement If no PlacementDecision is generated after creating a Placement , follow the procedure to troubleshoot the problem. 1.11.18.1. Symptom: Missing PlacementDecision after creating Placement After creating a Placement , a PlacementDecision is not automatically generated.
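To confirm the symptom, you can check whether any PlacementDecision resources exist for the Placement; the following is a sketch that assumes generated PlacementDecision resources carry the cluster.open-cluster-management.io/placement label, with placeholder names:
oc get placementdecisions -n <placement_namespace> -l cluster.open-cluster-management.io/placement=<placement-name>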
1.11.18.2. Resolving the problem: Missing PlacementDecision after creating Placement To resolve the issue, complete the following steps: Check the Placement conditions by running the following command: Replace placement-name with the name of the Placement . The output might resemble the following example: Check the output for the Status of PlacementMisconfigured and PlacementSatisfied : If the PlacementMisconfigured Status is true, your Placement has configuration errors. Check the included message for more details on the configuration errors and how to resolve them. If the PlacementSatisfied Status is false, no managed cluster satisfies your Placement . Check the included message for more details and how to resolve the error. In the example, no ManagedClusterSetBindings were found in the placement namespace. You can check the score of each cluster in Events to find out why some clusters with lower scores are not selected. The output might resemble the following example: Note: The placement controller assigns a score and generates an event for each filtered ManagedCluster . The placement controller generates a new event when the cluster score changes. 1.11.19. Troubleshooting a discovery failure of bare metal hosts on Dell hardware If the discovery of bare metal hosts fails on Dell hardware, the Integrated Dell Remote Access Controller (iDRAC) is likely configured to not allow certificates from unknown certificate authorities. 1.11.19.1. Symptom: Discovery failure of bare metal hosts on Dell hardware After you complete the procedure for discovering bare metal hosts by using the baseboard management controller, an error message similar to the following is displayed: 1.11.19.2. Resolving the problem: Discovery failure of bare metal hosts on Dell hardware The iDRAC is configured not to accept certificates from unknown certificate authorities. To bypass the problem, disable the certificate verification on the baseboard management controller of the host iDRAC by completing the following steps: In the iDRAC console, navigate to Configuration > Virtual media > Remote file share . Change the value of Expired or invalid certificate action to Yes . 1.11.20. Troubleshooting Minimal ISO boot failures You might encounter issues when trying to boot a minimal ISO. 1.11.20.1. Symptom: Minimal ISO boot failures The boot screen shows that the host has failed to download the root file system image. 1.11.20.2. Resolving the problem: Minimal ISO boot failures See Troubleshooting minimal ISO boot failures in the Assisted Installer for OpenShift Container Platform documentation to learn how to troubleshoot the issue. 1.11.21. Troubleshooting the Red Hat Enterprise Linux CoreOS image mirroring For hosted control planes on Red Hat OpenShift Virtualization in a disconnected environment, oc-mirror fails to automatically mirror the Red Hat Enterprise Linux CoreOS image to the internal registry. When you create your first hosted cluster, the Kubevirt virtual machine does not boot because the boot image is not available in the internal registry. 1.11.21.1. Symptom: oc-mirror fails to attempt image mirroring The oc-mirror plugin does not mirror the Red Hat Enterprise Linux CoreOS image from the release payload to the internal registry. 1.11.21.2. Resolving the problem: oc-mirror fails to attempt the image mirroring To resolve this issue, manually mirror the Red Hat Enterprise Linux CoreOS image to the internal registry.
Complete the following steps: Get the internal registry name by running the following command: oc get imagecontentsourcepolicy -o json | jq -r '.items[].spec.repositoryDigestMirrors[0].mirrors[0]' Get a payload image by running the following command: oc get clusterversion version -ojsonpath='{.status.desired.image}' Extract the 0000_50_installer_coreos-bootimages.yaml file that contains boot images from your payload image on the hosted cluster. Replace <payload_image> with the name of your payload image. Run the following command: oc image extract --file /release-manifests/0000_50_installer_coreos-bootimages.yaml <payload_image> --confirm Get the image by running the following command: cat 0000_50_installer_coreos-bootimages.yaml | yq -r .data.stream | jq -r '.architectures.x86_64.images.kubevirt."digest-ref"' Mirror the Red Hat Enterprise Linux CoreOS image to your internal registry. Replace <rhcos_image> with your Red Hat Enterprise Linux CoreOS image, for example, quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d9643ead36b1c026be664c9c65c11433c6cdf71bfd93ba229141d134a4a6dd94 . Replace <internal_registry> with the name of your internal registry, for example, virthost.ostest.test.metalkube.org:5000/localimages/ocp-v4.0-art-dev . Run the following command: oc image mirror <rhcos_image> <internal_registry> Create a YAML file named rhcos-boot-kubevirt.yaml that defines the ImageDigestMirrorSet object. See the following example configuration: apiVersion: config.openshift.io/v1 kind: ImageDigestMirrorSet metadata: name: rhcos-boot-kubevirt spec: repositoryDigestMirrors: - mirrors: - <rhcos_image_no_digest> 1 source: virthost.ostest.test.metalkube.org:5000/localimages/ocp-v4.0-art-dev 2 1 Specify your Red Hat Enterprise Linux CoreOS image without its digest, for example, quay.io/openshift-release-dev/ocp-v4.0-art-dev . 2 Specify the name of your internal registry, for example, virthost.ostest.test.metalkube.org:5000/localimages/ocp-v4.0-art-dev . Apply the rhcos-boot-kubevirt.yaml file to create the ImageDigestMirrorSet object by running the following command: oc apply -f rhcos-boot-kubevirt.yaml 1.11.22. Troubleshooting: Returning non bare metal clusters to the late binding pool If you are using late binding managed clusters without BareMetalHosts , you must complete additional manual steps to destroy a late binding cluster and return the nodes back to the Discovery ISO. 1.11.22.1. Symptom: Returning non bare metal clusters to the late binding pool For late binding managed clusters without BareMetalHosts , removing cluster information does not automatically return all nodes to the Discovery ISO. 1.11.22.2. Resolving the problem: Returning non bare metal clusters to the late binding pool To unbind the non bare metal nodes with late binding, complete the following steps: Remove the cluster information. See Removing a cluster from management to learn more. Clean the root disks. Reboot manually with the Discovery ISO. 1.11.23. Troubleshooting managed clusters Unknown on OpenShift Service on AWS with hosted control planes cluster The status of all managed clusters on an OpenShift Service on AWS hosted cluster suddenly becomes Unknown .
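As a quick, generic way to observe the symptom from the hub cluster (this is a general status check, not specific to OpenShift Service on AWS), you can list the managed clusters and review the availability column:
oc get managedclusters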
1.11.23.1. Symptom: All managed clusters are in Unknown status on OpenShift Service on AWS with hosted control planes cluster When you check the klusterlet-agent pod log in the open-cluster-management-agent namespace on your managed cluster, you see an error that resembles the following: E0809 18:45:29.450874 1 reflector.go:147] k8s.io/[email protected]/tools/cache/reflector.go:229: Failed to watch *v1.CertificateSigningRequest: failed to list *v1.CertificateSigningRequest: Get "https://api.xxx.openshiftapps.com:443/apis/certificates.k8s.io/v1/certificatesigningrequests?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate signed by unknown authority 1.11.23.2. Resolving the problem: All managed clusters are in Unknown status on OpenShift Service on AWS with hosted control planes cluster Create a KlusterletConfig resource with the name global if it does not exist. Set the spec.hubKubeAPIServerConfig.serverVerificationStrategy to UseSystemTruststore . See the following example: apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: global spec: hubKubeAPIServerConfig: serverVerificationStrategy: UseSystemTruststore Apply the resource by running the following command on the hub cluster. Replace <filename> with the name of your file: oc apply -f <filename> The state of some managed clusters might recover. Continue with the process for managed clusters that remain in the Unknown status. Export and decode the import.yaml file from the hub cluster by running the following command on the hub cluster. Replace <cluster_name> with the name of your cluster. oc get secret <cluster_name>-import -n <cluster_name> -o jsonpath={.data.import\.yaml} | base64 --decode > <cluster_name>-import.yaml Apply the file by running the following command on the managed cluster. oc apply -f <cluster_name>-import.yaml 1.11.24. Troubleshooting an attempt to upgrade managed cluster with missing OpenShift Container Platform version You do not see the OpenShift Container Platform version that you want when you attempt to upgrade your managed cluster in the console. 1.11.24.1. Symptom: Attempt to upgrade managed cluster with missing OpenShift Container Platform version When you attempt to upgrade a managed cluster from the console and click Upgrade available in the Cluster details view to choose the OpenShift Container Platform version from the dropdown list, the version is missing. 1.11.24.2. Resolving the problem: Attempt to upgrade managed cluster with missing OpenShift Container Platform version See the following procedure: Ensure that the version you want is included in the status of the ClusterVersion resource on the managed cluster. Run the following command: oc get clusterversion version -o jsonpath='{.status.availableUpdates[*].version}' If your expected version is not displayed, then the version is not applicable for this managed cluster. Check if the ManagedClusterInfo resource includes the version on the hub cluster. Run the following command: oc -n <cluster_name> get managedclusterinfo <cluster_name> -o jsonpath='{.status.distributionInfo.ocp.availableUpdates[*]}' If the version is included, check to see if there is a ClusterCurator resource with a failure on the hub cluster. Run the following command: oc -n <cluster_name> get ClusterCurator <cluster_name> -o yaml If the ClusterCurator resource exists and the status of its clustercurator-job condition is False , delete the ClusterCurator resource from the hub cluster.
Run the following command: oc -n <cluster_name> delete ClusterCurator <cluster_name> If the ManagedClusterInfo resource does not include the version, check the work-manager add-on log on the managed cluster and fix errors that are reported. Run the following command and replace the pod name with the real name in your environment: oc -n open-cluster-management-agent-addon logs klusterlet-addon-workmgr-<your_pod_name>
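To find the actual work-manager pod name to substitute into the previous command, you can list the pods in the add-on namespace; this is a generic sketch that filters on the pod name prefix:
oc -n open-cluster-management-agent-addon get pods | grep klusterlet-addon-workmgr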
[ "create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:cluster-manager-admin --user=<username>", "create clusterrolebinding (role-binding-name) --clusterrole=open-cluster-management:admin:<cluster-name> --user=<username>", "create rolebinding <role-binding-name> -n <cluster-name> --clusterrole=admin --user=<username>", "create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:view:<cluster-name> --user=<username>", "create rolebinding <role-binding-name> -n <cluster-name> --clusterrole=view --user=<username>", "get managedclusters.clusterview.open-cluster-management.io", "get managedclustersets.clusterview.open-cluster-management.io", "adm policy add-cluster-role-to-group open-cluster-management:clusterset-admin:server-foundation-clusterset server-foundation-team-admin", "adm policy add-cluster-role-to-group open-cluster-management:clusterset-view:server-foundation-clusterset server-foundation-team-user", "adm new-project server-foundation-clusterpool adm policy add-role-to-group admin server-foundation-team-admin --namespace server-foundation-clusterpool", "E0809 18:45:29.450874 1 reflector.go:147] k8s.io/[email protected]/tools/cache/reflector.go:229: Failed to watch *v1.CertificateSigningRequest: failed to list *v1.CertificateSigningRequest: Get \"https://api.xxx.openshiftapps.com:443/apis/certificates.k8s.io/v1/certificatesigningrequests?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority", "apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: global spec: hubKubeAPIServerConfig: serverVerificationStrategy: UseSystemTruststore", "apply -f <filename>", "get secret local-cluster-import -n local-cluster -o jsonpath={.data.import\\.yaml} | base64 --decode > import.yaml", "apply -f import.yaml", "E0203 07:10:38.266841 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224: Failed to watch *v1.ClusterClaim: failed to list *v1.ClusterClaim: v1.ClusterClaimList.Items: []v1.ClusterClaim: v1.ClusterClaim.v1.ClusterClaim.Spec: v1.ClusterClaimSpec.Lifetime: unmarshalerDecoder: time: unknown unit \"w\" in duration \"1w\", error found in #10 byte of ...|time\":\"1w\"}},{\"apiVe|..., bigger context ...|clusterPoolName\":\"policy-aas-hubs\",\"lifetime\":\"1w\"}},{\"apiVersion\":\"hive.openshift.io/v1\",\"kind\":\"Cl|", "edit clusterdeployment/<mycluster> -n <namespace>", "delete ns <namespace>", "status: agentLabelSelector: matchLabels: infraenvs.agent-install.openshift.io: qe2 bootArtifacts: initrd: https://assisted-image-service-multicluster-engine.redhat.com/images/0000/pxe-initrd?api_key=0000000&arch=x86_64&version=4.11 ipxeScript: https://assisted-service-multicluster-engine.redhat.com/api/assisted-install/v2/infra-envs/00000/downloads/files?api_key=000000000&file_name=ipxe-script kernel: https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.12/latest/rhcos-live-kernel-x86_64 rootfs: https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.12/latest/rhcos-live-rootfs.x86_64.img", "for artifact in oc get infraenv qe2 -ojsonpath=\"{.status.bootArtifacts}\" | jq \". 
| keys[]\" | sed \"s/\\\"//g\" do curl -k oc get infraenv qe2 -ojsonpath=\"{.status.bootArtifacts.USD{artifact}}\"` -o USDartifact", "patch clusterdeployment <clusterdeployment-name> -p '{\"metadata\":{\"finalizers\":null}}' --type=merge", "apiVersion: v1 kind: ConfigMap metadata: name: my-assisted-service-config namespace: multicluster-engine data: ALLOW_CONVERGED_FLOW: \"false\" 1", "annotate --overwrite AgentServiceConfig agent unsupported.agent-install.openshift.io/assisted-service-configmap=my-assisted-service-config", "progressing... mca and work configs mismatch", "status: conditions: - lastTransitionTime: \"2024-09-09T16:08:42Z\" message: progressing... mca and work configs mismatch reason: Progressing status: \"True\" type: Progressing configReferences: - desiredConfig: name: deploy-config namespace: open-cluster-management-hub specHash: b81380f1f1a1920388d90859a5d51f5521cecd77752755ba05ece495f551ebd0 group: addon.open-cluster-management.io lastObservedGeneration: 1 name: deploy-config namespace: open-cluster-management-hub resource: addondeploymentconfigs - desiredConfig: name: cluster-proxy specHash: \"\" group: proxy.open-cluster-management.io lastObservedGeneration: 1 name: cluster-proxy resource: managedproxyconfigurations", "-n <cluster-name> delete managedclusteraddon <addon-name>", "get bmh -n <cluster_provisioning_namespace>", "describe bmh -n <cluster_provisioning_namespace> <bmh_name>", "Status: Error Count: 1 Error Message: Image provisioning failed: ... [Errno 36] File name too long", "apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv spec: osImageVersion: 4.15", "-n openshift-console get route console", "console console-openshift-console.apps.new-coral.purple-chesterfield.com console https reencrypt/Redirect None", "apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: {}", "create namespace <namespace>", "project <namespace>", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <default> namespace: <namespace> spec: targetNamespaces: - <namespace>", "apply -f <path-to-file>/<operator-group>.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: multicluster-engine spec: sourceNamespace: openshift-marketplace source: redhat-operators channel: stable-2.7 installPlanApproval: Automatic name: multicluster-engine", "apply -f <path-to-file>/<subscription>.yaml", "apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: {}", "apply -f <path-to-file>/<custom-resource>.yaml", "error: unable to recognize \"./mce.yaml\": no matches for kind \"MultiClusterEngine\" in version \"operator.multicluster-engine.io/v1\"", "get mce -o=jsonpath='{.items[0].status.phase}'", "metadata: labels: node-role.kubernetes.io/infra: \"\" spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/infra", "spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra effect: NoSchedule operator: Exists", "nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra effect: NoSchedule operator: Exists", "spec: nodeSelector: node-role.kubernetes.io/infra: \"\"", "-n openshift-console get route console", "console console-openshift-console.apps.new-name.purple-name.com console https reencrypt/Redirect None", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- certificate_content -----END CERTIFICATE----- sshKey: >-", "- mirrors: - 
<your_registry>/rhacm2 source: registry.redhat.io/rhacm2 - mirrors: - <your_registry>/quay source: registry.redhat.io/quay - mirrors: - <your_registry>/compliance source: registry.redhat.io/compliance", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mce-repo spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:5000/multicluster-engine source: registry.redhat.io/multicluster-engine", "apply -f mce-policy.yaml", "UPSTREAM_REGISTRY=quay.io PRODUCT_REPO=openshift-release-dev RELEASE_NAME=ocp-release OCP_RELEASE=4.15.2-x86_64 LOCAL_REGISTRY=USD(hostname):5000 LOCAL_SECRET_JSON=<pull-secret> adm -a USD{LOCAL_SECRET_JSON} release mirror --from=USD{UPSTREAM_REGISTRY}/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE} --to=USD{LOCAL_REGISTRY}/ocp4 --to-release-image=USD{LOCAL_REGISTRY}/ocp4/release:USD{OCP_RELEASE}", "git clone https://github.com/openshift/cincinnati-graph-data", "build -f <docker-path> -t <graph-path>:latest", "push <graph-path>:latest --authfile=<pull-secret>.json", "apiVersion: v1 kind: ConfigMap metadata: name: trusted-ca data: updateservice-registry: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "patch image.config.openshift.io cluster -p '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"<trusted_ca>\"}}}' --type merge", "apiVersion: update-service.openshift.io/v1beta2 kind: update-service metadata: name: openshift-cincinnati-instance namespace: openshift-update-service spec: registry: <registry-host-name>:<port> 1 replicas: 1 repository: USD{LOCAL_REGISTRY}/ocp4/release graphDataImage: '<host-name>:<port>/cincinnati-graph-data-container' 2", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-mirror namespace: default spec: disabled: false remediationAction: enforce policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-image-content-source-policy spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: <your-local-mirror-name> 1 spec: repositoryDigestMirrors: - mirrors: - <your-registry> 2 source: registry.redhat.io --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-mirror namespace: default placementRef: name: placement-policy-mirror kind: PlacementRule apiGroup: apps.open-cluster-management.io subjects: - name: policy-mirror kind: Policy apiGroup: policy.open-cluster-management.io --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-mirror namespace: default spec: clusterSelector: matchExpressions: [] 3", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-catalog namespace: default spec: disabled: false remediationAction: enforce policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-catalog spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true - complianceType: musthave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace spec: sourceType: grpc image: '<registry_host_name>:<port>/olm/redhat-operators:v1' 1 displayName: My Operator Catalog publisher: grpc 
--- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-catalog namespace: default placementRef: name: placement-policy-catalog kind: PlacementRule apiGroup: apps.open-cluster-management.io subjects: - name: policy-catalog kind: Policy apiGroup: policy.open-cluster-management.io --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-catalog namespace: default spec: clusterSelector: matchExpressions: [] 2", "get clusterversion -o yaml", "get routes", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-cluster-version namespace: default annotations: policy.open-cluster-management.io/standards: null policy.open-cluster-management.io/categories: null policy.open-cluster-management.io/controls: null spec: disabled: false remediationAction: enforce policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-cluster-version spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: config.openshift.io/v1 kind: ClusterVersion metadata: name: version spec: channel: stable-4.4 upstream: >- https://example-cincinnati-policy-engine-uri/api/upgrades_info/v1/graph 1 --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-cluster-version namespace: default placementRef: name: placement-policy-cluster-version kind: PlacementRule apiGroup: apps.open-cluster-management.io subjects: - name: policy-cluster-version kind: Policy apiGroup: policy.open-cluster-management.io --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-cluster-version namespace: default spec: clusterSelector: matchExpressions: [] 2", "get clusterversion -o yaml", "apiVersion: v1 items: - apiVersion: config.openshift.io/v1 kind: ClusterVersion [..] 
spec: channel: stable-4.4 upstream: https://<hub-cincinnati-uri>/api/upgrades_info/v1/graph", "apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: overrides: components: - name: <name> 1 enabled: true", "patch MultiClusterEngine <multiclusterengine-name> --type=json -p='[{\"op\": \"add\", \"path\": \"/spec/overrides/components/-\",\"value\":{\"name\":\"<name>\",\"enabled\":true}}]'", "apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: overrides: components: - name: local-cluster enabled: false", "create secret generic <secret> -n <namespace> --from-file=.dockerconfigjson=<path-to-pull-secret> --type=kubernetes.io/dockerconfigjson", "apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: imagePullSecret: <secret>", "apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: targetNamespace: <target>", "apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: availabilityConfig: \"Basic\"", "spec: nodeSelector: node-role.kubernetes.io/infra: \"\"", "spec: tolerations: - key: node-role.kubernetes.io/infra effect: NoSchedule operator: Exists", "apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: overrides: components: - name: managedserviceaccount enabled: true", "patch MultiClusterEngine <multiclusterengine-name> --type=json -p='[{\"op\": \"add\", \"path\": \"/spec/overrides/components/-\",\"value\":{\"name\":\"managedserviceaccount\",\"enabled\":true}}]'", "Cannot delete MultiClusterEngine resource because ManagedCluster resource(s) exist", "project <namespace>", "delete multiclusterengine --all", "get multiclusterengine -o yaml", "❯ oc get csv NAME DISPLAY VERSION REPLACES PHASE multicluster-engine.v2.0.0 multicluster engine for Kubernetes 2.0.0 Succeeded ❯ oc delete clusterserviceversion multicluster-engine.v2.0.0 ❯ oc delete sub multicluster-engine", "#!/bin/bash delete apiservice v1.admission.cluster.open-cluster-management.io v1.admission.work.open-cluster-management.io delete validatingwebhookconfiguration multiclusterengines.multicluster.openshift.io delete mce --all", "kind: Secret metadata: name: <managed-cluster-name>-aws-creds namespace: <managed-cluster-namespace> type: Opaque data: aws_access_key_id: USD(echo -n \"USD{AWS_KEY}\" | base64 -w0) aws_secret_access_key: USD(echo -n \"USD{AWS_SECRET}\" | base64 -w0)", "label secret hypershift-operator-oidc-provider-s3-credentials -n local-cluster \"cluster.open-cluster-management.io/type=awss3\" label secret hypershift-operator-oidc-provider-s3-credentials -n local-cluster \"cluster.open-cluster-management.io/credentials=credentials=\"", "az ad sp create-for-rbac --role Contributor --name <service_principal> --scopes <subscription_path>", "az ad sp create-for-rbac --role Contributor --name <service_principal> --scopes <subscription_path>", "az account show", "az account show", "kind: Secret metadata: name: <managed-cluster-name>-azure-creds namespace: <managed-cluster-namespace> type: Opaque data: baseDomainResourceGroupName: USD(echo -n \"USD{azure_resource_group_name}\" | base64 -w0) osServicePrincipal.json: USD(base64 -w0 \"USD{AZURE_CRED_JSON}\")", "kind: Secret metadata: name: <managed-cluster-name>-gcp-creds namespace: <managed-cluster-namespace> type: Opaque data: osServiceAccount.json: USD(base64 -w0 
\"USD{GCP_CRED_JSON}\")", "- mirrors: - registry.example.com:5000/ocp4 source: quay.io/openshift-release-dev/ocp-release-nightly - mirrors: - registry.example.com:5000/ocp4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - registry.example.com:5000/ocp4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "- mirrors: - registry.example.com:5000/rhacm2 source: registry.redhat.io/rhacm2", "kind: Secret metadata: name: <managed-cluster-name>-vsphere-creds namespace: <managed-cluster-namespace> type: Opaque data: username: USD(echo -n \"USD{VMW_USERNAME}\" | base64 -w0) password.json: USD(base64 -w0 \"USD{VMW_PASSWORD}\")", "- mirrors: - registry.example.com:5000/ocp4 source: quay.io/openshift-release-dev/ocp-release-nightly - mirrors: - registry.example.com:5000/ocp4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - registry.example.com:5000/ocp4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "- mirrors: - registry.example.com:5000/rhacm2 source: registry.redhat.io/rhacm2", "kind: Secret metadata: name: <managed-cluster-name>-osp-creds namespace: <managed-cluster-namespace> type: Opaque data: clouds.yaml: USD(base64 -w0 \"USD{OSP_CRED_YAML}\") cloud: USD(echo -n \"openstack\" | base64 -w0)", "apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: labels: channel: fast visible: 'true' name: img4.x.1-x86-64-appsub spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.x.1-x86_64", "get clusterimageset", "quay.io/openshift-release-dev/ocp-release:4.6.8-x86_64", "apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: labels: channel: fast visible: 'true' name: img4.x.0-multi-appsub spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.x.0-multi", "pull quay.io/openshift-release-dev/ocp-release:4.x.1-x86_64 pull quay.io/openshift-release-dev/ocp-release:4.x.1-ppc64le pull quay.io/openshift-release-dev/ocp-release:4.x.1-s390x pull quay.io/openshift-release-dev/ocp-release:4.x.1-aarch64", "login <private-repo>", "push quay.io/openshift-release-dev/ocp-release:4.x.1-x86_64 <private-repo>/ocp-release:4.x.1-x86_64 push quay.io/openshift-release-dev/ocp-release:4.x.1-ppc64le <private-repo>/ocp-release:4.x.1-ppc64le push quay.io/openshift-release-dev/ocp-release:4.x.1-s390x <private-repo>/ocp-release:4.x.1-s390x push quay.io/openshift-release-dev/ocp-release:4.x.1-aarch64 <private-repo>/ocp-release:4.x.1-aarch64", "manifest create mymanifest", "manifest add mymanifest <private-repo>/ocp-release:4.x.1-x86_64 manifest add mymanifest <private-repo>/ocp-release:4.x.1-ppc64le manifest add mymanifest <private-repo>/ocp-release:4.x.1-s390x manifest add mymanifest <private-repo>/ocp-release:4.x.1-aarch64", "manifest push mymanifest docker://<private-repo>/ocp-release:4.x.1", "apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: labels: channel: fast visible: \"true\" name: img4.x.1-appsub spec: releaseImage: <private-repo>/ocp-release:4.x.1", "apply -f <file-name>.yaml", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-image-set-git-repo namespace: multicluster-engine data: gitRepoUrl: <forked acm-hive-openshift-releases repository URL> gitRepoBranch: backplane-<2.x> gitRepoPath: clusterImageSets channel: <fast or stable>", "get clusterImageSets delete clusterImageSet <clusterImageSet_NAME>", "apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: labels: channel: fast name: img<4.x.x>-x86-64-appsub spec: releaseImage: 
IMAGE_REGISTRY_IPADDRESS_or__DNSNAME/REPO_PATH/ocp-release@sha256:073a4e46289be25e2a05f5264c8f1d697410db66b960c9ceeddebd1c61e58717", "adm release info <tagged_openshift_release_image> | grep \"Pull From\"", "Pull From: quay.io/openshift-release-dev/ocp-release@sha256:69d1292f64a2b67227c5592c1a7d499c7d00376e498634ff8e1946bc9ccdddfe", "create -f <clusterImageSet_FILE>", "create -f img4.11.9-x86_64.yaml", "delete -f subscribe/subscription-fast", "make subscribe-candidate", "make subscribe-fast", "make subscribe-stable", "delete -f subscribe/subscription-fast", "git clone https://github.com/stolostron/acm-hive-openshift-releases.git cd acm-hive-openshift-releases git checkout origin/backplane-<2.x>", "find clusterImageSets/fast -type d -exec oc apply -f {} \\; 2> /dev/null", "apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: labels: channel: stable visible: 'true' name: img4.x.47-x86-64-appsub spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.x.47-x86_64", "kind: ConfigMap apiVersion: v1 metadata: name: <my-baremetal-cluster-install-manifests> namespace: <mynamespace> data: 99_metal3-config.yaml: | kind: ConfigMap apiVersion: v1 metadata: name: metal3-config namespace: openshift-machine-api data: http_port: \"6180\" provisioning_interface: \"enp1s0\" provisioning_ip: \"172.00.0.3/24\" dhcp_range: \"172.00.0.10,172.00.0.100\" deploy_kernel_url: \"http://172.00.0.3:6180/images/ironic-python-agent.kernel\" deploy_ramdisk_url: \"http://172.00.0.3:6180/images/ironic-python-agent.initramfs\" ironic_endpoint: \"http://172.00.0.3:6385/v1/\" ironic_inspector_endpoint: \"http://172.00.0.3:5150/v1/\" cache_url: \"http://192.168.111.1/images\" rhcos_image_url: \"https://releases-art-rhcos.svc.ci.openshift.org/art/storage/releases/rhcos-4.3/43.81.201911192044.0/x86_64/rhcos-43.81.201911192044.0-openstack.x86_64.qcow2.gz\"", "apply -f <filename>.yaml", "apiVersion: hive.openshift.io/v1 kind: ClusterDeployment metadata: name: <my-baremetal-cluster> namespace: <mynamespace> annotations: hive.openshift.io/try-install-once: \"true\" spec: baseDomain: test.example.com clusterName: <my-baremetal-cluster> controlPlaneConfig: servingCertificates: {} platform: baremetal: libvirtSSHPrivateKeySecretRef: name: provisioning-host-ssh-private-key provisioning: installConfigSecretRef: name: <my-baremetal-cluster-install-config> sshPrivateKeySecretRef: name: <my-baremetal-hosts-ssh-private-key> manifestsConfigMapRef: name: <my-baremetal-cluster-install-manifests> imageSetRef: name: <my-clusterimageset> sshKnownHosts: - \"10.1.8.90 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXvVVVKUYVkuyvkuygkuyTCYTytfkufTYAAAAIbmlzdHAyNTYAAABBBKWjJRzeUVuZs4yxSy4eu45xiANFIIbwE3e1aPzGD58x/NX7Yf+S8eFKq4RrsfSaK2hVJyJjvVIhUsU9z2sBJP8=\" pullSecretRef: name: <my-baremetal-cluster-pull-secret>", "apply -f <filename>.yaml", "vpc-1 (us-gov-east-1) : 10.0.0.0/20 subnet-11 (us-gov-east-1a): 10.0.0.0/23 subnet-12 (us-gov-east-1b): 10.0.2.0/23 subnet-13 (us-gov-east-1c): 10.0.4.0/23 subnet-12 (us-gov-east-1d): 10.0.8.0/23 subnet-12 (us-gov-east-1e): 10.0.10.0/23 subnet-12 (us-gov-east-1f): 10.0.12.0/2", "vpc-2 (us-gov-east-1) : 10.0.16.0/20 subnet-21 (us-gov-east-1a): 10.0.16.0/23 subnet-22 (us-gov-east-1b): 10.0.18.0/23 subnet-23 (us-gov-east-1c): 10.0.20.0/23 subnet-24 (us-gov-east-1d): 10.0.22.0/23 subnet-25 (us-gov-east-1e): 10.0.24.0/23 subnet-26 (us-gov-east-1f): 10.0.28.0/23", "ec2:CreateVpcEndpointServiceConfiguration ec2:DescribeVpcEndpointServiceConfigurations ec2:ModifyVpcEndpointServiceConfiguration 
ec2:DescribeVpcEndpointServicePermissions ec2:ModifyVpcEndpointServicePermissions ec2:DeleteVpcEndpointServiceConfigurations", "ec2:DescribeVpcEndpointServices ec2:DescribeVpcEndpoints ec2:CreateVpcEndpoint ec2:CreateTags ec2:DescribeNetworkInterfaces ec2:DescribeVPCs ec2:DeleteVpcEndpoints route53:CreateHostedZone route53:GetHostedZone route53:ListHostedZonesByVPC route53:AssociateVPCWithHostedZone route53:DisassociateVPCFromHostedZone route53:CreateVPCAssociationAuthorization route53:DeleteVPCAssociationAuthorization route53:ListResourceRecordSets route53:ChangeResourceRecordSets route53:DeleteHostedZone", "route53:AssociateVPCWithHostedZone route53:DisassociateVPCFromHostedZone ec2:DescribeVPCs", "spec: awsPrivateLink: ## The list of inventory of VPCs that can be used to create VPC ## endpoints by the controller. endpointVPCInventory: - region: us-east-1 vpcID: vpc-1 subnets: - availabilityZone: us-east-1a subnetID: subnet-11 - availabilityZone: us-east-1b subnetID: subnet-12 - availabilityZone: us-east-1c subnetID: subnet-13 - availabilityZone: us-east-1d subnetID: subnet-14 - availabilityZone: us-east-1e subnetID: subnet-15 - availabilityZone: us-east-1f subnetID: subnet-16 - region: us-east-1 vpcID: vpc-2 subnets: - availabilityZone: us-east-1a subnetID: subnet-21 - availabilityZone: us-east-1b subnetID: subnet-22 - availabilityZone: us-east-1c subnetID: subnet-23 - availabilityZone: us-east-1d subnetID: subnet-24 - availabilityZone: us-east-1e subnetID: subnet-25 - availabilityZone: us-east-1f subnetID: subnet-26 ## The credentialsSecretRef references a secret with permissions to create. ## The resources in the account where the inventory of VPCs exist. credentialsSecretRef: name: <hub-account-credentials-secret-name> ## A list of VPC where various mce clusters exists. 
associatedVPCs: - region: region-mce1 vpcID: vpc-mce1 credentialsSecretRef: name: <credentials-that-have-access-to-account-where-MCE1-VPC-exists> - region: region-mce2 vpcID: vpc-mce2 credentialsSecretRef: name: <credentials-that-have-access-to-account-where-MCE2-VPC-exists>", "api.<cluster_name>.<base_domain>", "*.apps.<cluster_name>.<base_domain>", "api.<cluster_name>.<base_domain>", "*.apps.<cluster_name>.<base_domain>", "apiVersion: v1 kind: Secret type: Opaque metadata: name: ocp3-openstack-trust namespace: ocp3 stringData: ca.crt: | -----BEGIN CERTIFICATE----- <Base64 certificate contents here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <Base64 certificate contents here> -----END CERTIFICATE----", "platform: openstack: certificatesSecretRef: name: ocp3-openstack-trust credentialsSecretRef: name: ocp3-openstack-creds cloud: openstack", "api.<cluster_name>.<base_domain>", "*.apps.<cluster_name>.<base_domain>", "apiVersion: v1 kind: Namespace metadata: name: sample-namespace", "apiVersion: v1 kind: Secret type: kubernetes.io/dockerconfigjson metadata: name: <pull-secret> namespace: sample-namespace stringData: .dockerconfigjson: 'your-pull-secret-json' 1", "apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-v4.15.0 spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.15.0-rc.0-x86_64", "apiVersion: hive.openshift.io/v1 kind: ClusterDeployment metadata: name: single-node namespace: demo-worker4 spec: baseDomain: hive.example.com clusterInstallRef: group: extensions.hive.openshift.io kind: AgentClusterInstall name: test-agent-cluster-install 1 version: v1beta1 clusterName: test-cluster controlPlaneConfig: servingCertificates: {} platform: agentBareMetal: agentSelector: matchLabels: location: internal pullSecretRef: name: <pull-secret> 2", "apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: name: test-agent-cluster-install namespace: demo-worker4 spec: platformType: BareMetal 1 clusterDeploymentRef: name: single-node 2 imageSetRef: name: openshift-v4.15.0 3 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.111.0/24 serviceNetwork: - 172.30.0.0/16 provisionRequirements: controlPlaneAgents: 1 sshPublicKey: ssh-rsa <your-public-key-here> 4", "apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: <mynmstateconfig> namespace: <demo-worker4> labels: demo-nmstate-label: <value> spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 02:00:00:80:12:14 ipv4: enabled: true address: - ip: 192.168.111.30 prefix-length: 24 dhcp: false - name: eth1 type: ethernet state: up mac-address: 02:00:00:80:12:15 ipv4: enabled: true address: - ip: 192.168.140.30 prefix-length: 24 dhcp: false dns-resolver: config: server: - 192.168.126.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.1 next-hop-interface: eth1 table-id: 254 - destination: 0.0.0.0/0 next-hop-address: 192.168.140.1 next-hop-interface: eth1 table-id: 254 interfaces: - name: \"eth0\" macAddress: \"02:00:00:80:12:14\" - name: \"eth1\" macAddress: \"02:00:00:80:12:15\"", "apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: myinfraenv namespace: demo-worker4 spec: clusterRef: name: single-node 1 namespace: demo-worker4 2 pullSecretRef: name: pull-secret sshAuthorizedKey: <your_public_key_here> nmStateConfigLabelSelector: matchLabels: demo-nmstate-label: value proxy: httpProxy: http://USERNAME:[email protected]:PORT 
httpsProxy: https://USERNAME:[email protected]:PORT noProxy: .example.com,172.22.0.0/24,10.10.0.0/24", "curl --insecure -o image.iso USD(kubectl -n sample-namespace get infraenvs.agent-install.openshift.io myinfraenv -o=jsonpath=\"{.status.isoDownloadURL}\")", "-n sample-namespace patch agents.agent-install.openshift.io 07e80ea9-200c-4f82-aff4-4932acb773d4 -p '{\"spec\":{\"approved\":true}}' --type merge", "apiVersion: v1 kind: Proxy baseDomain: <domain> proxy: httpProxy: http://<username>:<password>@<proxy.example.com>:<port> httpsProxy: https://<username>:<password>@<proxy.example.com>:<port> noProxy: <wildcard-of-domain>,<provisioning-network/CIDR>,<BMC-address-range/CIDR>", "apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall spec: proxy: httpProxy: http://<username>:<password>@<proxy.example.com>:<port> 1 httpsProxy: https://<username>:<password>@<proxy.example.com>:<port> 2 noProxy: <wildcard-of-domain>,<provisioning-network/CIDR>,<BMC-address-range/CIDR> 3", "create secret generic pull-secret -n <open-cluster-management> --from-file=.dockerconfigjson=<path-to-pull-secret> --type=kubernetes.io/dockerconfigjson", "edit managedcluster <cluster-name>", "spec: hubAcceptsClient: true managedClusterClientConfigs: - url: <https://api.new-managed.dev.redhat.com> 1", "apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: annotations: open-cluster-management/nodeSelector: '{\"dedicated\":\"acm\"}' open-cluster-management/tolerations: '[{\"key\":\"dedicated\",\"operator\":\"Equal\",\"value\":\"acm\",\"effect\":\"NoSchedule\"}]'", "apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: <klusterletconfigName> spec: nodePlacement: nodeSelector: dedicated: acm tolerations: - key: dedicated operator: Equal value: acm effect: NoSchedule", "delete po -n open-cluster-management `oc get pod -n open-cluster-management | grep multiclusterhub-operator| cut -d' ' -f1`", "login", "new-project <cluster_name>", "apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: <cluster_name> labels: cloud: auto-detect vendor: auto-detect spec: hubAcceptsClient: true", "cloud: Amazon vendor: OpenShift", "apply -f managed-cluster.yaml", "apiVersion: v1 kind: Secret metadata: name: auto-import-secret namespace: <cluster_name> stringData: autoImportRetry: \"5\" # If you are using the kubeconfig file, add the following value for the kubeconfig file # that has the current context set to the cluster to import: kubeconfig: |- <kubeconfig_file> # If you are using the token/server pair, add the following two values instead of # the kubeconfig file: token: <Token to access the cluster> server: <cluster_api_url> type: Opaque", "apply -f auto-import-secret.yaml", "-n <cluster_name> annotate secrets auto-import-secret managedcluster-import-controller.open-cluster-management.io/keeping-auto-import-secret=\"\"", "get managedcluster <cluster_name>", "login", "get pod -n open-cluster-management-agent", "get secret <cluster_name>-import -n <cluster_name> -o jsonpath={.data.crds\\\\.yaml} | base64 --decode > klusterlet-crd.yaml", "get secret <cluster_name>-import -n <cluster_name> -o jsonpath={.data.import\\\\.yaml} | base64 --decode > import.yaml", "login", "apply -f klusterlet-crd.yaml", "apply -f import.yaml", "get managedcluster <cluster_name>", "apiVersion: agent.open-cluster-management.io/v1 kind: KlusterletAddonConfig metadata: name: <cluster_name> namespace: <cluster_name> spec: applicationManager: enabled: true 
certPolicyController: enabled: true policyController: enabled: true searchCollector: enabled: true", "apply -f klusterlet-addon-config.yaml", "get pod -n open-cluster-management-agent-addon", "delete managedcluster <cluster_name>", "export agent_registration_host=USD(oc get route -n multicluster-engine agent-registration -o=jsonpath=\"{.spec.host}\")", "get configmap -n kube-system kube-root-ca.crt -o=jsonpath=\"{.data['ca\\.crt']}\" > ca.crt_", "apiVersion: v1 kind: ServiceAccount metadata: name: managed-cluster-import-agent-registration-sa namespace: multicluster-engine --- apiVersion: v1 kind: Secret type: kubernetes.io/service-account-token metadata: name: managed-cluster-import-agent-registration-sa-token namespace: multicluster-engine annotations: kubernetes.io/service-account.name: \"managed-cluster-import-agent-registration-sa\" --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: managedcluster-import-controller-agent-registration-client rules: - nonResourceURLs: [\"/agent-registration/*\"] verbs: [\"get\"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: managed-cluster-import-agent-registration roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: managedcluster-import-controller-agent-registration-client subjects: - kind: ServiceAccount name: managed-cluster-import-agent-registration-sa namespace: multicluster-engine", "export token=USD(oc get secret -n multicluster-engine managed-cluster-import-agent-registration-sa-token -o=jsonpath='{.data.token}' | base64 -d)", "patch clustermanager cluster-manager --type=merge -p '{\"spec\":{\"registrationConfiguration\":{\"featureGates\":[ {\"feature\": \"ManagedClusterAutoApproval\", \"mode\": \"Enable\"}], \"autoApproveUsers\":[\"system:serviceaccount:multicluster-engine:agent-registration-bootstrap\"]}}}'", "curl --cacert ca.crt -H \"Authorization: Bearer USDtoken\" https://USDagent_registration_host/agent-registration/crds/v1 | oc apply -f -", "curl --cacert ca.crt -H \"Authorization: Bearer USDtoken\" https://USDagent_registration_host/agent-registration/manifests/<clusterName>?klusterletconfig=<klusterletconfigName>&duration=<duration> | oc apply -f -", "apiVersion: v1 kind: Namespace metadata: name: managed-cluster", "apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-v4.15 spec: releaseImage: quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863", "apiVersion: v1 kind: Secret type: kubernetes.io/dockerconfigjson metadata: name: pull-secret namespace: managed-cluster stringData: .dockerconfigjson: <pull-secret-json> 1", "get secret -n openshift-kube-apiserver node-kubeconfigs -ojson | jq '.data[\"lb-ext.kubeconfig\"]' --raw-output | base64 -d > /tmp/kubeconfig.some-other-cluster", "-n managed-cluster create secret generic some-other-cluster-admin-kubeconfig --from-file=kubeconfig=/tmp/kubeconfig.some-other-cluster", "apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: name: <your-cluster-name> 1 namespace: <managed-cluster> spec: networking: userManagedNetworking: true clusterDeploymentRef: name: <your-cluster> imageSetRef: name: openshift-v4.11.18 provisionRequirements: controlPlaneAgents: 2 sshPublicKey: <\"\"> 3", "apiVersion: hive.openshift.io/v1 kind: ClusterDeployment metadata: name: <your-cluster-name> 1 namespace: managed-cluster spec: baseDomain: <redhat.com> 2 installed: <true> 3 clusterMetadata: adminKubeconfigSecretRef: 
name: <your-cluster-name-admin-kubeconfig> 4 clusterID: <\"\"> 5 infraID: <\"\"> 6 clusterInstallRef: group: extensions.hive.openshift.io kind: AgentClusterInstall name: your-cluster-name-install version: v1beta1 clusterName: your-cluster-name platform: agentBareMetal: pullSecretRef: name: pull-secret", "apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: your-infraenv namespace: managed-cluster spec: clusterRef: name: your-cluster-name namespace: managed-cluster pullSecretRef: name: pull-secret sshAuthorizedKey: \"\"", "get infraenv -n managed-cluster some-other-infraenv -ojson | jq \".status.<url>\" --raw-output | xargs curl -k -o /storage0/isos/some-other.iso", "-kubeconfig <source_hub_kubeconfig> -n <managed_cluster_name> get <resource_name> <cluster_provisioning_namespace> -oyaml > <resource_name>.yaml", "yq --in-place -y 'del(.metadata.ownerReferences)' AgentClusterInstall.yaml", "yq --in-place -y 'del(.metadata.ownerReferences)' AdminKubeconfigSecret.yaml", "-kubeconfig <target_hub_kubeconfig> delete ManagedCluster <cluster_name>", "-kubeconfig <target_hub_kubeconfig> apply -f <resource_name>.yaml", "apiVersion: imageregistry.open-cluster-management.io/v1alpha1 kind: ManagedClusterImageRegistry metadata: name: <imageRegistryName> namespace: <namespace> spec: placementRef: group: cluster.open-cluster-management.io resource: placements name: <placementName> 1 pullSecret: name: <pullSecretName> 2 registries: 3 - mirror: <mirrored-image-registry-address> source: <image-registry-address> - mirror: <mirrored-image-registry-address> source: <image-registry-address>", "registries: - mirror: localhost:5000/rhacm2/ source: registry.redhat.io/rhacm2 - mirror: localhost:5000/multicluster-engine source: registry.redhat.io/multicluster-engine", "registries: - mirror: localhost:5000/rhacm2-registration-rhel8-operator source: registry.redhat.io/rhacm2/registration-rhel8-operator", "apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: <klusterletconfigName> spec: pullSecret: namespace: <pullSecretNamespace> name: <pullSecretName> registries: - mirror: <mirrored-image-registry-address> source: <image-registry-address> - mirror: <mirrored-image-registry-address> source: <image-registry-address>", "kubectl create secret docker-registry myPullSecret --docker-server=<your-registry-server> --docker-username=<my-name> --docker-password=<my-password>", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: myPlacement namespace: myNamespace spec: clusterSets: - myClusterSet tolerations: - key: \"cluster.open-cluster-management.io/unreachable\" operator: Exists", "apiVersion: cluster.open-cluster-management.io/v1beta2 kind: ManagedClusterSet metadata: name: myClusterSet --- apiVersion: cluster.open-cluster-management.io/v1beta2 kind: ManagedClusterSetBinding metadata: name: myClusterSet namespace: myNamespace spec: clusterSet: myClusterSet", "apiVersion: imageregistry.open-cluster-management.io/v1alpha1 kind: ManagedClusterImageRegistry metadata: name: myImageRegistry namespace: myNamespace spec: placementRef: group: cluster.open-cluster-management.io resource: placements name: myPlacement pullSecret: name: myPullSecret registry: myRegistryAddress", "get machinepools -n <managed-cluster-namespace>", "edit machinepool <MachinePool-resource-name> -n <managed-cluster-namespace>", "get machinepools -n <managed-cluster-namespace>", "edit machinepool <name-of-MachinePool-resource> -n <namespace-of-managed-cluster>", 
"get machinepools.hive.openshift.io -n <managed-cluster-namespace>", "edit machinepool.hive.openshift.io <MachinePool-resource-name> -n <managed-cluster-namespace>", "export <kubeconfig_name>=USD(oc get cd USD<cluster_name> -o \"jsonpath={.spec.clusterMetadata.adminKubeconfigSecretRef.name}\") extract secret/USD<kubeconfig_name> --keys=kubeconfig --to=- > original-kubeconfig --kubeconfig=original-kubeconfig get node", "Unable to connect to the server: tls: failed to verify certificate: x509: certificate signed by unknown authority", "echo <base64 encoded blob> | base64 --decode > decoded-existing-certs.pem", "cp original-kubeconfig <new_kubeconfig_name>", "cat decoded-existing-certs.pem new-ca-certificate.pem | openssl base64 -A", "KUBECONFIG=<new_kubeconfig_name> oc get nodes", "patch secret USDoriginal-kubeconfig --type='json' -p=\"[{'op': 'replace', 'path': '/data/kubeconfig', 'value': 'USD(openssl base64 -A -in <new_kubeconfig_name>)'},{'op': 'replace', 'path': '/data/raw-kubeconfig', 'value': 'USD(openssl base64 -A -in <new_kubeconfig_name>)'}]\"", "watch -n 5 \"oc get agent -n managed-cluster\"", "get agent -n managed-cluster -ojson | jq -r '.items[] | select(.spec.approved==false) |select(.spec.clusterDeploymentName==null) | .metadata.name'| xargs oc -n managed-cluster patch -p '{\"spec\":{\"clusterDeploymentName\":{\"name\":\"some-other-cluster\",\"namespace\":\"managed-cluster\"}}}' --type merge agent", "get agent -n managed-cluster -ojson | jq -r '.items[] | select(.spec.approved==false) | .metadata.name'| xargs oc -n managed-cluster patch -p '{\"spec\":{\"approved\":true}}' --type merge agent", "patch agent <AGENT-NAME> -p '{\"spec\":{\"role\": \"master\"}}' --type=merge", "bmac.agent-install.openshift.io/role: master", "patch agent <AGENT-NAME> -p '{\"spec\":{\"role\": \"master\"}}' --type=merge", "edit clusterdeployment <name-of-cluster> -n <namespace-of-cluster>", "get clusterdeployment <name-of-cluster> -n <namespace-of-cluster> -o yaml", "edit clusterdeployment <name-of-cluster> -n <namespace-of-cluster>", "get clusterdeployment <name-of-cluster> -n <namespace-of-cluster> -o yaml", "UPSTREAM_REGISTRY=quay.io PRODUCT_REPO=openshift-release-dev RELEASE_NAME=ocp-release OCP_RELEASE=4.12.2-x86_64 LOCAL_REGISTRY=USD(hostname):5000 LOCAL_SECRET_JSON=/path/to/pull/secret 1 adm -a USD{LOCAL_SECRET_JSON} release mirror --from=USD{UPSTREAM_REGISTRY}/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE} --to=USD{LOCAL_REGISTRY}/ocp4 --to-release-image=USD{LOCAL_REGISTRY}/ocp4/release:USD{OCP_RELEASE}", "git clone https://github.com/openshift/cincinnati-graph-data", "FROM registry.access.redhat.com/ubi8/ubi:8.1 1 RUN curl -L -o cincinnati-graph-data.tar.gz https://github.com/openshift/cincinnati-graph-data/archive/master.tar.gz 2 RUN mkdir -p /var/lib/cincinnati/graph-data/ 3 CMD exec /bin/bash -c \"tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/ cincinnati/graph-data/ --strip-components=1\" 4", "build -f <path_to_Dockerfile> -t <USD{DISCONNECTED_REGISTRY}/cincinnati/cincinnati-graph-data-container>:latest 1 2 push <USD{DISCONNECTED_REGISTRY}/cincinnati/cincinnati-graph-data-container><2>:latest --authfile=</path/to/pull_secret>.json 3", "apiVersion: v1 kind: ConfigMap metadata: name: trusted-ca data: updateservice-registry: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "patch image.config.openshift.io cluster -p '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"trusted-ca\"}}}' --type merge", "apiVersion: cincinnati.openshift.io/v1beta2 kind: Cincinnati metadata: name: 
openshift-update-service-instance namespace: openshift-cincinnati spec: registry: <registry_host_name>:<port> 1 replicas: 1 repository: USD{LOCAL_REGISTRY}/ocp4/release graphDataImage: '<host_name>:<port>/cincinnati-graph-data-container' 2", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: <your-local-mirror-name> 1 spec: repositoryDigestMirrors: - mirrors: - <your-registry> 2 source: registry.redhat.io", "apply -f mirror.yaml", "apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true --- apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace spec: sourceType: grpc image: '<registry_host_name>:<port>/olm/redhat-operators:v1' 1 displayName: My Operator Catalog publisher: grpc", "apply -f source.yaml", "get clusterversion -o yaml", "apiVersion: v1 items: - apiVersion: config.openshift.io/v1 kind: ClusterVersion [..] spec: channel: stable-4.x upstream: https://api.openshift.com/api/upgrades_info/v1/graph", "get routes", "edit clusterversion version", "get routes -A", "get clusterversion -o yaml", "apiVersion: v1 items: - apiVersion: config.openshift.io/v1 kind: ClusterVersion [..] spec: channel: stable-4.x upstream: https://<hub-cincinnati-uri>/api/upgrades_info/v1/graph", "export KUBECONFIG=<managed-cluster-kubeconfig>", "create role -n default test-role --verb=list,get --resource=pods create rolebinding -n default test-rolebinding --serviceaccount=default:default --role=test-role", "get secret -n default | grep <default-token>", "export MANAGED_CLUSTER_TOKEN=USD(kubectl -n default get secret <default-token> -o jsonpath={.data.token} | base64 -d)", "config view --minify --raw=true > cluster-proxy.kubeconfig", "export TARGET_MANAGED_CLUSTER=<managed-cluster-name> export NEW_SERVER=https://USD(oc get route -n multicluster-engine cluster-proxy-addon-user -o=jsonpath='{.spec.host}')/USDTARGET_MANAGED_CLUSTER sed -i'' -e '/server:/c\\ server: '\"USDNEW_SERVER\"'' cluster-proxy.kubeconfig export CADATA=USD(oc get configmap -n openshift-service-ca kube-root-ca.crt -o=go-template='{{index .data \"ca.crt\"}}' | base64) sed -i'' -e '/certificate-authority-data:/c\\ certificate-authority-data: '\"USDCADATA\"'' cluster-proxy.kubeconfig", "sed -i'' -e '/client-certificate-data/d' cluster-proxy.kubeconfig sed -i'' -e '/client-key-data/d' cluster-proxy.kubeconfig sed -i'' -e '/token/d' cluster-proxy.kubeconfig", "sed -i'' -e 'USDa\\ token: '\"USDMANAGED_CLUSTER_TOKEN\"'' cluster-proxy.kubeconfig", "get pods --kubeconfig=cluster-proxy.kubeconfig -n <default>", "export PROMETHEUS_TOKEN=USD(kubectl get secret -n openshift-monitoring USD(kubectl get serviceaccount -n openshift-monitoring prometheus-k8s -o=jsonpath='{.secrets[0].name}') -o=jsonpath='{.data.token}' | base64 -d)", "get configmap kube-root-ca.crt -o=jsonpath='{.data.ca\\.crt}' > hub-ca.crt", "export SERVICE_NAMESPACE=openshift-monitoring export SERVICE_NAME=prometheus-k8s export SERVICE_PORT=9091 export SERVICE_PATH=\"api/v1/query?query=machine_cpu_sockets\" curl --cacert hub-ca.crt USDNEW_SERVER/api/v1/namespaces/USDSERVICE_NAMESPACE/services/USDSERVICE_NAME:USDSERVICE_PORT/proxy-service/USDSERVICE_PATH -H \"Authorization: Bearer USDPROMETHEUS_TOKEN\"", "apiVersion: addon.open-cluster-management.io/v1alpha1 kind: AddOnDeploymentConfig metadata: name: <name> 1 namespace: <namespace> 2 spec: agentInstallNamespace: open-cluster-management-agent-addon proxyConfig: httpsProxy: 
\"http://<username>:<password>@<ip>:<port>\" 3 noProxy: \".cluster.local,.svc,172.30.0.1\" 4 caBundle: <value> 5", "apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: cluster-proxy namespace: <namespace> 1 spec: installNamespace: open-cluster-management-addon configs: group: addon.open-cluster-management.io resource: addondeploymentconfigs name: <name> 2 namespace: <namespace> 3", "operation: retryPosthook: installPosthook", "operation: retryPosthook: upgradePosthook", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: ClusterCurator metadata: name: test-inno namespace: test-inno spec: desiredCuration: upgrade destroy: {} install: {} scale: {} upgrade: channel: stable-4.x desiredUpdate: 4.x.1 monitorTimeout: 150 posthook: - extra_vars: {} clusterName: test-inno type: post_check name: ACM Upgrade Checks prehook: - extra_vars: {} clusterName: test-inno type: pre_check name: ACM Upgrade Checks towerAuthSecret: awx inventory: Demo Inventory", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: ClusterCurator metadata: name: cluster1 {{{} namespace: cluster1 labels: test1: test1 test2: test2 {}}}spec: desiredCuration: install install: jobMonitorTimeout: 5 posthook: - extra_vars: {} name: Demo Job Template type: Job prehook: - extra_vars: {} name: Demo Job Template type: Job towerAuthSecret: toweraccess", "spec: desiredCuration: upgrade upgrade: intermediateUpdate: 4.14.x desiredUpdate: 4.15.x monitorTimeout: 120", "posthook: - extra_vars: {} name: Unpause machinepool type: Job prehook: - extra_vars: {} name: Pause machinepool type: Job", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: ClusterCurator metadata: annotations: cluster.open-cluster-management.io/upgrade-clusterversion-backoff-limit: \"10\" name: your-name namespace: your-namespace spec: desiredCuration: upgrade upgrade: intermediateUpdate: 4.14.x desiredUpdate: 4.15.x monitorTimeout: 120 posthook: - extra_vars: {} name: Unpause machinepool type: Job prehook: - extra_vars: {} name: Pause machinepool type: Job", "apiVersion: hypershift.openshift.io/v1beta1 kind: HostedCluster metadata: name: my-cluster namespace: clusters spec: pausedUntil: 'true'", "apiVersion: hypershift.openshift.io/v1beta1 kind: NodePool metadata: name: my-cluster-us-east-2 namespace: clusters spec: pausedUntil: 'true'", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: ClusterCurator metadata: name: my-cluster namespace: clusters labels: open-cluster-management: curator spec: desiredCuration: install install: jobMonitorTimeout: 5 prehook: - name: Demo Job Template extra_vars: variable1: something-interesting variable2: 2 - name: Demo Job Template posthook: - name: Demo Job Template towerAuthSecret: toweraccess", "apiVersion: v1 kind: Secret metadata: name: toweraccess namespace: clusters stringData: host: https://my-tower-domain.io token: ANSIBLE_TOKEN_FOR_admin", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: ClusterCurator metadata: name: my-cluster namespace: clusters labels: open-cluster-management: curator spec: desiredCuration: upgrade upgrade: desiredUpdate: 4.15.1 1 monitorTimeout: 120 prehook: - name: Demo Job Template extra_vars: variable1: something-interesting variable2: 2 - name: Demo Job Template posthook: - name: Demo Job Template towerAuthSecret: toweraccess", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: ClusterCurator metadata: name: my-cluster namespace: clusters labels: open-cluster-management: curator spec: 
desiredCuration: destroy destroy: jobMonitorTimeout: 5 prehook: - name: Demo Job Template extra_vars: variable1: something-interesting variable2: 2 - name: Demo Job Template posthook: - name: Demo Job Template towerAuthSecret: toweraccess", "apiVersion: cluster.open-cluster-management.io/v1alpha1 kind: ClusterClaim metadata: name: id.openshift.io spec: value: 95f91f25-d7a2-4fc3-9237-2ef633d8451c", "apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: labels: cloud: Amazon clusterID: 95f91f25-d7a2-4fc3-9237-2ef633d8451c installer.name: multiclusterhub installer.namespace: open-cluster-management name: cluster1 vendor: OpenShift name: cluster1 spec: hubAcceptsClient: true leaseDurationSeconds: 60 status: allocatable: cpu: '15' memory: 65257Mi capacity: cpu: '18' memory: 72001Mi clusterClaims: - name: id.k8s.io value: cluster1 - name: kubeversion.open-cluster-management.io value: v1.18.3+6c42de8 - name: platform.open-cluster-management.io value: AWS - name: product.open-cluster-management.io value: OpenShift - name: id.openshift.io value: 95f91f25-d7a2-4fc3-9237-2ef633d8451c - name: consoleurl.openshift.io value: 'https://console-openshift-console.apps.xxxx.dev04.red-chesterfield.com' - name: version.openshift.io value: '4.x' conditions: - lastTransitionTime: '2020-10-26T07:08:49Z' message: Accepted by hub cluster admin reason: HubClusterAdminAccepted status: 'True' type: HubAcceptedManagedCluster - lastTransitionTime: '2020-10-26T07:09:18Z' message: Managed cluster joined reason: ManagedClusterJoined status: 'True' type: ManagedClusterJoined - lastTransitionTime: '2020-10-30T07:20:20Z' message: Managed cluster is available reason: ManagedClusterAvailable status: 'True' type: ManagedClusterConditionAvailable version: kubernetes: v1.18.3+6c42de8", "apiVersion: cluster.open-cluster-management.io/v1alpha1 kind: ClusterClaim metadata: name: <custom_claim_name> spec: value: <custom_claim_value>", "get clusterclaims.cluster.open-cluster-management.io", "apiVersion: cluster.open-cluster-management.io/v1beta2 kind: ManagedClusterSetBinding metadata: name: global namespace: open-cluster-management-global-set spec: clusterSet: global", "apiVersion: cluster.open-cluster-management.io/v1beta2 kind: ManagedClusterSet metadata: name: <cluster_set>", "kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: clusterrole1 rules: - apiGroups: [\"cluster.open-cluster-management.io\"] resources: [\"managedclustersets/join\"] resourceNames: [\"<cluster_set>\"] verbs: [\"create\"]", "apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: <cluster_name> spec: hubAcceptsClient: true", "apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: <cluster_name> labels: cluster.open-cluster-management.io/clusterset: <cluster_set> spec: hubAcceptsClient: true", "apiVersion: cluster.open-cluster-management.io/v1beta2 kind: ManagedClusterSetBinding metadata: namespace: <namespace> name: <cluster_set> spec: clusterSet: <cluster_set>", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <clusterrole> rules: - apiGroups: [\"cluster.open-cluster-management.io\"] resources: [\"managedclustersets/bind\"] resourceNames: [\"<cluster_set>\"] verbs: [\"create\"]", "patch managedcluster <managed_cluster_name> -p '{\"spec\":{\"taints\":[{\"key\": \"key\", \"value\": \"value\", \"effect\": \"NoSelect\"}]}}' --type=merge", "patch managedcluster <managed_cluster_name> --type='json' -p='[{\"op\": \"add\", 
\"path\": \"/spec/taints/-\", \"value\": {\"key\": \"key\", \"value\": \"value\", \"effect\": \"NoSelect\"}}]'", "apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: cluster1 spec: hubAcceptsClient: true taints: - effect: NoSelect key: cluster.open-cluster-management.io/unavailable timeAdded: '2022-02-21T08:11:54Z'", "apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: cluster1 spec: hubAcceptsClient: true taints: - effect: NoSelect key: cluster.open-cluster-management.io/unreachable timeAdded: '2022-02-21T08:11:06Z'", "apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: cluster1 spec: hubAcceptsClient: true taints: - effect: NoSelect key: gpu value: \"true\" timeAdded: '2022-02-21T08:11:06Z'", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement1 namespace: default spec: tolerations: - key: gpu value: \"true\" operator: Equal", "apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: cluster1 spec: hubAcceptsClient: true taints: - effect: NoSelect key: cluster.open-cluster-management.io/unreachable timeAdded: '2022-02-21T08:11:06Z'", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: demo4 namespace: demo1 spec: tolerations: - key: cluster.open-cluster-management.io/unreachable operator: Exists tolerationSeconds: 300", "get managedclusters -l cluster.open-cluster-management.io/clusterset=<cluster_set>", "labels: cluster.open-cluster-management.io/clusterset: clusterset1", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement namespace: ns1 spec: predicates: - requiredClusterSelector: labelSelector: matchLabels: vendor: OpenShift", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement namespace: ns1 spec: predicates: - requiredClusterSelector: claimSelector: matchExpressions: - key: region.open-cluster-management.io operator: In values: - us-west-1", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement namespace: ns1 spec: clusterSets: - clusterset1 - clusterset2 predicates: - requiredClusterSelector: claimSelector: matchExpressions: - key: region.open-cluster-management.io operator: In values: - us-west-1", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement namespace: ns1 spec: numberOfClusters: 3 1 predicates: - requiredClusterSelector: labelSelector: matchLabels: vendor: OpenShift claimSelector: matchExpressions: - key: region.open-cluster-management.io operator: In values: - us-west-1", "apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: cluster1 spec: hubAcceptsClient: true taints: - effect: NoSelect key: gpu value: \"true\" timeAdded: '2022-02-21T08:11:06Z'", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement namespace: ns1 spec: tolerations: - key: gpu value: \"true\" operator: Equal", "apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: cluster1 spec: hubAcceptsClient: true taints: - effect: NoSelect key: cluster.open-cluster-management.io/unreachable timeAdded: '2022-02-21T08:11:06Z'", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement namespace: ns1 spec: tolerations: - key: cluster.open-cluster-management.io/unreachable 
operator: Exists tolerationSeconds: 300 1", "feature.open-cluster-management.io/addon-application-manager: available", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement1 namespace: ns1 spec: predicates: - requiredClusterSelector: labelSelector: matchExpressions: - key: feature.open-cluster-management.io/addon-application-manager operator: Exists", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement2 namespace: ns1 spec: predicates: - requiredClusterSelector: labelSelector: matchLabels: \"feature.open-cluster-management.io/addon-application-manager\": \"available\"", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement3 namespace: ns1 spec: predicates: - requiredClusterSelector: labelSelector: matchExpressions: - key: feature.open-cluster-management.io/addon-application-manager operator: DoesNotExist", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement namespace: ns1 spec: numberOfClusters: 1 prioritizerPolicy: configurations: - scoreCoordinate: builtIn: ResourceAllocatableMemory", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement namespace: ns1 spec: numberOfClusters: 1 prioritizerPolicy: configurations: - scoreCoordinate: builtIn: ResourceAllocatableCPU weight: 2 - scoreCoordinate: builtIn: ResourceAllocatableMemory weight: 2", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement namespace: ns1 spec: numberOfClusters: 2 prioritizerPolicy: mode: Exact configurations: - scoreCoordinate: builtIn: Steady weight: 3 - scoreCoordinate: type: AddOn addOn: resourceName: default scoreName: cpuratio", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: PlacementDecision metadata: labels: cluster.open-cluster-management.io/placement: placement1 name: placement1-kbc7q namespace: ns1 ownerReferences: - apiVersion: cluster.open-cluster-management.io/v1beta1 blockOwnerDeletion: true controller: true kind: Placement name: placement1 uid: 05441cf6-2543-4ecc-8389-1079b42fe63e status: decisions: - clusterName: cluster1 reason: '' - clusterName: cluster2 reason: '' - clusterName: cluster3 reason: ''", "kind: ClusterClaim metadata: annotations: cluster.open-cluster-management.io/createmanagedcluster: \"false\" 1", "apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: managed-serviceaccount namespace: <target_managed_cluster> spec: installNamespace: open-cluster-management-agent-addon", "apply -f -", "apiVersion: authentication.open-cluster-management.io/v1alpha1 kind: ManagedServiceAccount metadata: name: <managedserviceaccount_name> namespace: <target_managed_cluster> spec: rotation: {}", "get managedserviceaccount <managed_serviceaccount_name> -n <target_managed_cluster> -o yaml", "get secret <managed_serviceaccount_name> -n <target_managed_cluster> -o yaml", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: audit: profile: Default servingCerts: namedCertificates: - names: - api.mycluster.example.com servingCertificate: name: old-cert-secret", "cp old.crt combined.crt", "cat new.crt >> combined.crt", "create secret tls combined-certs-secret --cert=combined.crt --key=old.key -n openshift-config", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: audit: profile: Default servingCerts: namedCertificates: - names: - 
api.mycluster.example.com servingCertificate: name: combined-cert-secret", "create secret tls new-cert-secret --cert=new.crt --key=new.key -n openshift-config {code}", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: audit: profile: Default servingCerts: namedCertificates: - names: - api.mycluster.example.com servingCertificate: name: new-cert-secret", "apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: http-proxy spec: hubKubeAPIServerConfig: proxyURL: \"http://<username>:<password>@<ip>:<port>\"", "apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: https-proxy spec: hubKubeAPIServerConfig: proxyURL: \"https://<username>:<password>@<ip>:<port>\" trustedCABundles: - name: \"proxy-ca-bundle\" caBundle: name: <configmap-name> namespace: <configmap-namespace>", "create -n <configmap-namespace> configmap <configmap-name> --from-file=ca.crt=/path/to/ca/file", "apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: annotations: agent.open-cluster-management.io/klusterlet-config: <klusterlet-config-name> name:<managed-cluster-name> spec: hubAcceptsClient: true leaseDurationSeconds: 60", "apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: annotations: open-cluster-management/nodeSelector: '{\"dedicated\":\"acm\"}' open-cluster-management/tolerations: '[{\"key\":\"dedicated\",\"operator\":\"Equal\",\"value\":\"acm\",\"effect\":\"NoSchedule\"}]'", "apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: <name> 1 spec: hubKubeAPIServerConfig: url: \"https://api.example.com:6443\" 2 serverVerificationStrategy: UseCustomCABundles trustedCABundles: - name: <custom-ca-bundle> 3 caBundle: name: <custom-ca-bundle-configmap> 4 namespace: <multicluster-engine> 5", "apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: annotations: agent.open-cluster-management.io/klusterlet-config: 1 name: 2 spec: hubAcceptsClient: true leaseDurationSeconds: 60", "apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: test spec: hubKubeAPIServerConfig: url: \"https://api.example.test.com:6443\" --- apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: global spec: hubKubeAPIServerConfig: url: \"https://api.example.global.com:6443\"", "apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: test spec: hubKubeAPIServerURL: \"\" - apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: global spec: hubKubeAPIServerURL: \"example.global.com\"", "apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: test - apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: global spec: hubKubeAPIServerURL: \"example.global.com\"", "apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: ca-strategy spec: hubKubeAPIServerConfig: serverVerificationStrategy: UseAutoDetectedCABundle trustedCABundles: - name: new-ca caBundle: name: new-ocp-ca namespace: default", "apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: ca-strategy spec: hubKubeAPIServerConfig: serverVerificationStrategy: UseSystemTruststore", "apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: 
ca-strategy spec: hubKubeAPIServerConfig: serverVerificationStrategy: UseCustomCABundles trustedCABundles: - name: ca caBundle: name: ocp-ca namespace: default", "apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: global spec: hubKubeAPIServerConfig: serverVerificationStrategy: UseSystemTruststore", "apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: test-ca spec: hubKubeAPIServerConfig: serverVerificationStrategy: UseCustomCABundles trustedCABundles: - name: ca caBundle: name: ocp-ca namespace: default -- apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: annotations: agent.open-cluster-management.io/klusterlet-config: test-ca name: cluster1 spec: hubAcceptsClient: true leaseDurationSeconds: 60", "delete po -n open-cluster-management `oc get pod -n open-cluster-management | grep multiclusterhub-operator| cut -d' ' -f1`", "delete managedcluster USDCLUSTER_NAME", "delete clusterdeployment <CLUSTER_NAME> -n USDCLUSTER_NAME", "delete po -n open-cluster-management `oc get pod -n open-cluster-management | grep multiclusterhub-operator| cut -d' ' -f1`", "open-cluster-management-agent Active 10m open-cluster-management-agent-addon Active 10m", "get klusterlet | grep klusterlet | awk '{print USD1}' | xargs oc patch klusterlet --type=merge -p '{\"metadata\":{\"finalizers\": []}}'", "delete namespaces open-cluster-management-agent open-cluster-management-agent-addon --wait=false get crds | grep open-cluster-management.io | awk '{print USD1}' | xargs oc delete crds --wait=false get crds | grep open-cluster-management.io | awk '{print USD1}' | xargs oc patch crds --type=merge -p '{\"metadata\":{\"finalizers\": []}}'", "get crds | grep open-cluster-management.io | awk '{print USD1}' get ns | grep open-cluster-management-agent", "oc rsh -n openshift-etcd etcd-control-plane-0.example.com etcdctl endpoint status --cluster -w table", "sh-4.4#etcdctl compact USD(etcdctl endpoint status --write-out=\"json\" | egrep -o '\"revision\":[0-9]*' | egrep -o '[0-9]*' -m1)", "compacted revision 158774421", "apiVersion: discovery.open-cluster-management.io/v1 kind: DiscoveryConfig metadata: name: discovery namespace: <NAMESPACE_NAME> spec: credential: <SECRET_NAME> filters: lastActive: 7 openshiftVersions: - \"4.15\"", "apiVersion: discovery.open-cluster-management.io/v1 kind: DiscoveredCluster metadata: name: fd51aafa-95a8-41f7-a992-6fb95eed3c8e namespace: <NAMESPACE_NAME> spec: activity_timestamp: \"2021-04-19T21:06:14Z\" cloudProvider: vsphere console: https://console-openshift-console.apps.qe1-vmware-pkt.dev02.red-chesterfield.com creation_timestamp: \"2021-04-19T16:29:53Z\" credential: apiVersion: v1 kind: Secret name: <SECRET_NAME> namespace: <NAMESPACE_NAME> display_name: qe1-vmware-pkt.dev02.red-chesterfield.com name: fd51aafa-95a8-41f7-a992-6fb95eed3c8e openshiftVersion: 4.15 status: Stale", "apiVersion: discovery.open-cluster-management.io/v1 kind: DiscoveredCluster metadata: name: 28c17977-fc73-4050-b5cc-a5aa2d1d6892 namespace: discovery spec: openshiftVersion: <4.x.z> isManagedCluster: false cloudProvider: aws name: 28c17977-fc73-4050-b5cc-a5aa2d1d6892 displayName: rosa-dc status: Active importAsManagedCluster: true 1 type: <supported-type> 2", "apiVersion: discovery.open-cluster-management.io/v1 kind: DiscoveredCluster metadata: annotations: discovery.open-cluster-management.io/previously-auto-imported: 'true'", "2024-06-12T14:11:43.366Z INFO reconcile Skipped automatic import for 
DiscoveredCluster due to existing 'discovery.open-cluster-management.io/previously-auto-imported' annotation {\"Name\": \"rosa-dc\"}", "patch discoveredcluster <name> -n <namespace> --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/importAsManagedCluster\", \"value\": true}]'", "get managedcluster <name>", "rosa describe cluster --cluster=<cluster-name> | grep -o '^ID:.*", "get crd baremetalhosts.metal3.io", "Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io \"baremetalhosts.metal3.io\" not found", "apply -f", "get provisioning", "patch provisioning provisioning-configuration --type merge -p '{\"spec\":{\"watchAllNamespaces\": true }}'", "apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: \"Disabled\" watchAllNamespaces: true", "apply -f", "apiVersion: v1 kind: ConfigMap metadata: name: <mirror-config> namespace: multicluster-engine labels: app: assisted-service data: ca-bundle.crt: | <certificate-content> registries.conf: | unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"registry.redhat.io/multicluster-engine\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.registry.com:5000/multicluster-engine\"", "{ \"Authorization\": \"Basic xyz\" }", "{ \"api_key\": \"myexampleapikey\", }", "create secret generic -n multicluster-engine os-images-http-auth --from-file=./query_params --from-file=./headers", "-n multicluster-engine create configmap image-service-additional-ca --from-file=tls.crt", "apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent spec: databaseStorage: accessModes: - ReadWriteOnce resources: requests: storage: <db_volume_size> filesystemStorage: accessModes: - ReadWriteOnce resources: requests: storage: <fs_volume_size> mirrorRegistryRef: name: <mirror_config> 1 unauthenticatedRegistries: - <unauthenticated_registry> 2 imageStorage: accessModes: - ReadWriteOnce resources: requests: storage: <img_volume_size> 3 OSImageAdditionalParamsRef: name: os-images-http-auth OSImageCACertRef: name: image-service-additional-ca osImages: - openshiftVersion: \"<ocp_version>\" 4 version: \"<ocp_release_version>\" 5 url: \"<iso_url>\" 6 cpuArchitecture: \"x86_64\"", "apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent spec: databaseStorage: accessModes: - ReadWriteOnce resources: requests: storage: <db_volume_size> 1 filesystemStorage: accessModes: - ReadWriteOnce resources: requests: storage: <fs_volume_size> 2 imageStorage: accessModes: - ReadWriteOnce resources: requests: storage: <img_volume_size> 3", "login", "apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: annotations: agent-install.openshift.io/service-image-base: el8", "login", "get routes --all-namespaces | grep assisted-image-service", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: ingress-controller-with-nlb namespace: openshift-ingress-operator spec: domain: nlb-apps.<domain>.com routeSelector: matchLabels: router-type: nlb endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB", "apply -f ingresscontroller.yaml", "get ingresscontroller -n openshift-ingress-operator", "edit ingresscontroller <name> -n openshift-ingress-operator", "edit route assisted-image-service -n <namespace>", "metadata: labels: router-type: nlb name: 
assisted-image-service", "assisted-image-service-multicluster-engine.apps.<yourdomain>.com", "get pods -n multicluster-engine | grep assist", "login", "apiVersion: v1 kind: Namespace metadata: name: <your_namespace> 1", "apply -f namespace.yaml", "apiVersion: v1 kind: Secret type: kubernetes.io/dockerconfigjson metadata: name: pull-secret 1 namespace: <your_namespace> stringData: .dockerconfigjson: <your_pull_secret> 2", "apply -f pull-secret.yaml", "apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: myinfraenv namespace: <your_namespace> spec: proxy: httpProxy: <http://user:password@ipaddr:port> httpsProxy: <http://user:password@ipaddr:port> noProxy: additionalNTPSources: sshAuthorizedKey: pullSecretRef: name: <name> agentLabels: <key>: <value> nmStateConfigLabelSelector: matchLabels: <key>: <value> clusterRef: name: <cluster_name> namespace: <project_name> ignitionConfigOverride: '{\"ignition\": {\"version\": \"3.1.0\"}, ...}' cpuArchitecture: x86_64 ipxeScriptType: DiscoveryImageAlways kernelArguments: - operation: append value: audit=0 additionalTrustBundle: <bundle> osImageVersion: <version>", "apply -f infra-env.yaml", "describe infraenv myinfraenv -n <your_namespace>", "apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: mynmstateconfig namespace: <your-infraenv-namespace> labels: some-key: <some-value> spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 02:00:00:80:12:14 ipv4: enabled: true address: - ip: 192.168.111.30 prefix-length: 24 dhcp: false - name: eth1 type: ethernet state: up mac-address: 02:00:00:80:12:15 ipv4: enabled: true address: - ip: 192.168.140.30 prefix-length: 24 dhcp: false dns-resolver: config: server: - 192.168.126.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.1 next-hop-interface: eth1 table-id: 254 - destination: 0.0.0.0/0 next-hop-address: 192.168.140.1 next-hop-interface: eth1 table-id: 254 interfaces: - name: \"eth0\" macAddress: \"02:00:00:80:12:14\" - name: \"eth1\" macAddress: \"02:00:00:80:12:15\"", "apply -f nmstateconfig.yaml", "get infraenv -n <infra env namespace> <infra env name> -o jsonpath='{.status.isoDownloadURL}'", "https://assisted-image-service-assisted-installer.apps.example-acm-hub.com/byapikey/eyJhbGciOiJFUzI1NiIsInC93XVCJ9.eyJpbmZyYV9lbnZfaWQcTA0Y38sWVjYi02MTA0LTQ4NDMtODasdkOGIxYTZkZGM5ZTUifQ.3ydTpHaXJmTasd7uDp2NvGUFRKin3Z9Qct3lvDky1N-5zj3KsRePhAM48aUccBqmucGt3g/4.16/x86_64/minimal.iso", "get agent -n <infra env namespace>", "NAME CLUSTER APPROVED ROLE STAGE 24a92a6f-ea35-4d6f-9579-8f04c0d3591e false auto-assign", "patch agent -n <infra env namespace> <agent name> -p '{\"spec\":{\"approved\":true}}' --type merge", "get agent -n <infra env namespace>", "NAME CLUSTER APPROVED ROLE STAGE 173e3a84-88e2-4fe1-967f-1a9242503bec true auto-assign", "apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent spec: iPXEHTTPRoute: enabled", "apiVersion: v1 kind: Secret metadata: name: <bmc-secret-name> namespace: <your_infraenv_namespace> 1 type: Opaque data: username: <username> password: <password>", "apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <bmh-name> namespace: <your-infraenv-namespace> 1 annotations: inspect.metal3.io: disabled bmac.agent-install.openshift.io/hostname: <hostname> 2 bmac.agent-install.openshift.io/role: <role> 3 labels: infraenvs.agent-install.openshift.io: <your-infraenv> 4 spec: online: true automatedCleaningMode: disabled 5 bootMACAddress: 
<your-mac-address> 6 bmc: address: <machine-address> 7 credentialsName: <bmc-secret-name> 8 rootDeviceHints: deviceName: /dev/sda 9", "bmac.agent-install.openshift.io/remove-agent-and-node-on-delete: true", "delete bmh <bmh-name>", "[\"--append-karg\", \"ip=192.0.2.2::192.0.2.254:255.255.255.0:core0.example.com:enp1s0:none\", \"--save-partindex\", \"4\"]", "{\"ignition\": \"version\": \"3.1.0\"}, \"storage\": {\"files\": [{\"path\": \"/tmp/example\", \"contents\": {\"source\": \"data:text/plain;base64,aGVscGltdHJhcHBlZGluYXN3YWdnZXJzcGVj\"}}]}}", "GET /cluster.open-cluster-management.io/v1/managedclusters", "POST /cluster.open-cluster-management.io/v1/managedclusters", "{ \"apiVersion\" : \"cluster.open-cluster-management.io/v1\", \"kind\" : \"ManagedCluster\", \"metadata\" : { \"labels\" : { \"vendor\" : \"OpenShift\" }, \"name\" : \"cluster1\" }, \"spec\": { \"hubAcceptsClient\": true, \"managedClusterClientConfigs\": [ { \"caBundle\": \"test\", \"url\": \"https://test.com\" } ] }, \"status\" : { } }", "GET /cluster.open-cluster-management.io/v1/managedclusters/{cluster_name}", "DELETE /cluster.open-cluster-management.io/v1/managedclusters/{cluster_name}", "\"^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?USD\"", "GET /cluster.open-cluster-management.io/v1beta2/managedclustersets", "POST /cluster.open-cluster-management.io/v1beta2/managedclustersets", "{ \"apiVersion\" : \"cluster.open-cluster-management.io/v1beta2\", \"kind\" : \"ManagedClusterSet\", \"metadata\" : { \"name\" : \"clusterset1\" }, \"spec\": { }, \"status\" : { } }", "GET /cluster.open-cluster-management.io/v1beta2/managedclustersets/{clusterset_name}", "DELETE /cluster.open-cluster-management.io/v1beta2/managedclustersets/{clusterset_name}", "GET /cluster.open-cluster-management.io/v1beta2/namespaces/{namespace}/managedclustersetbindings", "POST /cluster.open-cluster-management.io/v1beta2/namespaces/{namespace}/managedclustersetbindings", "{ \"apiVersion\" : \"cluster.open-cluster-management.io/v1\", \"kind\" : \"ManagedClusterSetBinding\", \"metadata\" : { \"name\" : \"clusterset1\", \"namespace\" : \"ns1\" }, \"spec\": { \"clusterSet\": \"clusterset1\" }, \"status\" : { } }", "GET /cluster.open-cluster-management.io/v1beta2/namespaces/{namespace}/managedclustersetbindings/{clustersetbinding_name}", "DELETE /cluster.open-cluster-management.io/v1beta2/managedclustersetbindings/{clustersetbinding_name}", "GET /managedclusters.clusterview.open-cluster-management.io", "LIST /managedclusters.clusterview.open-cluster-management.io", "{ \"apiVersion\" : \"clusterview.open-cluster-management.io/v1alpha1\", \"kind\" : \"ClusterView\", \"metadata\" : { \"name\" : \"<user_ID>\" }, \"spec\": { }, \"status\" : { } }", "WATCH /managedclusters.clusterview.open-cluster-management.io", "GET /managedclustersets.clusterview.open-cluster-management.io", "LIST /managedclustersets.clusterview.open-cluster-management.io", "WATCH /managedclustersets.clusterview.open-cluster-management.io", "POST /authentication.open-cluster-management.io/v1beta1/managedserviceaccounts", "apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: controller-gen.kubebuilder.io/version: v0.14.0 name: managedserviceaccounts.authentication.open-cluster-management.io spec: group: authentication.open-cluster-management.io names: kind: ManagedServiceAccount listKind: ManagedServiceAccountList plural: managedserviceaccounts singular: managedserviceaccount scope: Namespaced versions: - deprecated: true deprecationWarning: 
authentication.open-cluster-management.io/v1alpha1 ManagedServiceAccount is deprecated; use authentication.open-cluster-management.io/v1beta1 ManagedServiceAccount; version v1alpha1 will be removed in the next release name: v1alpha1 schema: openAPIV3Schema: description: ManagedServiceAccount is the Schema for the managedserviceaccounts API properties: apiVersion: description: |- APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources type: string kind: description: |- Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds type: string metadata: type: object spec: description: ManagedServiceAccountSpec defines the desired state of ManagedServiceAccount properties: rotation: description: Rotation is the policy for rotation the credentials. properties: enabled: default: true description: |- Enabled prescribes whether the ServiceAccount token will be rotated from the upstream type: boolean validity: default: 8640h0m0s description: Validity is the duration for which the signed ServiceAccount token is valid. type: string type: object ttlSecondsAfterCreation: description: |- ttlSecondsAfterCreation limits the lifetime of a ManagedServiceAccount. If the ttlSecondsAfterCreation field is set, the ManagedServiceAccount will be automatically deleted regardless of the ManagedServiceAccount's status. When the ManagedServiceAccount is deleted, its lifecycle guarantees (e.g. finalizers) will be honored. If this field is unset, the ManagedServiceAccount won't be automatically deleted. If this field is set to zero, the ManagedServiceAccount becomes eligible for deletion immediately after its creation. In order to use ttlSecondsAfterCreation, the EphemeralIdentity feature gate must be enabled. exclusiveMinimum: true format: int32 minimum: 0 type: integer required: - rotation type: object status: description: ManagedServiceAccountStatus defines the observed state of ManagedServiceAccount properties: conditions: description: Conditions is the condition list. items: description: \"Condition contains details for one aspect of the current state of this API Resource.\\n---\\nThis struct is intended for direct use as an array at the field path .status.conditions. For example,\\n\\n\\n\\ttype FooStatus struct{\\n\\t // Represents the observations of a foo's current state.\\n\\t // Known .status.conditions.type are: \\\"Available\\\", \\\"Progressing\\\", and \\\"Degraded\\\"\\n\\t // +patchMergeKey=type\\n\\t // +patchStrategy=merge\\n\\t // +listType=map\\n\\t \\ // +listMapKey=type\\n\\t Conditions []metav1.Condition `json:\\\"conditions,omitempty\\\" patchStrategy:\\\"merge\\\" patchMergeKey:\\\"type\\\" protobuf:\\\"bytes,1,rep,name=conditions\\\"`\\n\\n\\n\\t \\ // other fields\\n\\t}\" properties: lastTransitionTime: description: |- lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. 
format: date-time type: string message: description: |- message is a human readable message indicating details about the transition. This may be an empty string. maxLength: 32768 type: string observedGeneration: description: |- observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. format: int64 minimum: 0 type: integer reason: description: |- reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. maxLength: 1024 minLength: 1 pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?USD type: string status: description: status of the condition, one of True, False, Unknown. enum: - \"True\" - \"False\" - Unknown type: string type: description: |- type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) maxLength: 316 pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])USD type: string required: - lastTransitionTime - message - reason - status - type type: object type: array expirationTimestamp: description: ExpirationTimestamp is the time when the token will expire. format: date-time type: string tokenSecretRef: description: |- TokenSecretRef is a reference to the corresponding ServiceAccount's Secret, which stores the CA certficate and token from the managed cluster. properties: lastRefreshTimestamp: description: |- LastRefreshTimestamp is the timestamp indicating when the token in the Secret is refreshed. format: date-time type: string name: description: Name is the name of the referenced secret. type: string required: - lastRefreshTimestamp - name type: object type: object type: object served: true storage: false subresources: status: {} - name: v1beta1 schema: openAPIV3Schema: description: ManagedServiceAccount is the Schema for the managedserviceaccounts API properties: apiVersion: description: |- APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources type: string kind: description: |- Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds type: string metadata: type: object spec: description: ManagedServiceAccountSpec defines the desired state of ManagedServiceAccount properties: rotation: description: Rotation is the policy for rotation the credentials. properties: enabled: default: true description: |- Enabled prescribes whether the ServiceAccount token will be rotated before it expires. 
Deprecated: All ServiceAccount tokens will be rotated before they expire regardless of this field. type: boolean validity: default: 8640h0m0s description: Validity is the duration of validity for requesting the signed ServiceAccount token. type: string type: object ttlSecondsAfterCreation: description: |- ttlSecondsAfterCreation limits the lifetime of a ManagedServiceAccount. If the ttlSecondsAfterCreation field is set, the ManagedServiceAccount will be automatically deleted regardless of the ManagedServiceAccount's status. When the ManagedServiceAccount is deleted, its lifecycle guarantees (e.g. finalizers) will be honored. If this field is unset, the ManagedServiceAccount won't be automatically deleted. If this field is set to zero, the ManagedServiceAccount becomes eligible for deletion immediately after its creation. In order to use ttlSecondsAfterCreation, the EphemeralIdentity feature gate must be enabled. exclusiveMinimum: true format: int32 minimum: 0 type: integer required: - rotation type: object status: description: ManagedServiceAccountStatus defines the observed state of ManagedServiceAccount properties: conditions: description: Conditions is the condition list. items: description: \"Condition contains details for one aspect of the current state of this API Resource.\\n---\\nThis struct is intended for direct use as an array at the field path .status.conditions. For example,\\n\\n\\n\\ttype FooStatus struct{\\n\\t // Represents the observations of a foo's current state.\\n\\t // Known .status.conditions.type are: \\\"Available\\\", \\\"Progressing\\\", and \\\"Degraded\\\"\\n\\t // +patchMergeKey=type\\n\\t // +patchStrategy=merge\\n\\t // +listType=map\\n\\t \\ // +listMapKey=type\\n\\t Conditions []metav1.Condition `json:\\\"conditions,omitempty\\\" patchStrategy:\\\"merge\\\" patchMergeKey:\\\"type\\\" protobuf:\\\"bytes,1,rep,name=conditions\\\"`\\n\\n\\n\\t \\ // other fields\\n\\t}\" properties: lastTransitionTime: description: |- lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. format: date-time type: string message: description: |- message is a human readable message indicating details about the transition. This may be an empty string. maxLength: 32768 type: string observedGeneration: description: |- observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. format: int64 minimum: 0 type: integer reason: description: |- reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. maxLength: 1024 minLength: 1 pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?USD type: string status: description: status of the condition, one of True, False, Unknown. enum: - \"True\" - \"False\" - Unknown type: string type: description: |- type of condition in CamelCase or in foo.example.com/CamelCase. 
--- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) maxLength: 316 pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])USD type: string required: - lastTransitionTime - message - reason - status - type type: object type: array expirationTimestamp: description: ExpirationTimestamp is the time when the token will expire. format: date-time type: string tokenSecretRef: description: |- TokenSecretRef is a reference to the corresponding ServiceAccount's Secret, which stores the CA certficate and token from the managed cluster. properties: lastRefreshTimestamp: description: |- LastRefreshTimestamp is the timestamp indicating when the token in the Secret is refreshed. format: date-time type: string name: description: Name is the name of the referenced secret. type: string required: - lastRefreshTimestamp - name type: object type: object type: object served: true storage: true subresources: status: {}", "GET /authentication.open-cluster-management.io/v1beta1/namespaces/{namespace}/managedserviceaccounts/{managedserviceaccount_name}", "DELETE /authentication.open-cluster-management.io/v1beta1/namespaces/{namespace}/managedserviceaccounts/{managedserviceaccount_name}", "POST /apis/multicluster.openshift.io/v1alpha1/multiclusterengines", "{ \"apiVersion\": \"apiextensions.k8s.io/v1\", \"kind\": \"CustomResourceDefinition\", \"metadata\": { \"annotations\": { \"controller-gen.kubebuilder.io/version\": \"v0.4.1\" }, \"creationTimestamp\": null, \"name\": \"multiclusterengines.multicluster.openshift.io\" }, \"spec\": { \"group\": \"multicluster.openshift.io\", \"names\": { \"kind\": \"MultiClusterEngine\", \"listKind\": \"MultiClusterEngineList\", \"plural\": \"multiclusterengines\", \"shortNames\": [ \"mce\" ], \"singular\": \"multiclusterengine\" }, \"scope\": \"Cluster\", \"versions\": [ { \"additionalPrinterColumns\": [ { \"description\": \"The overall state of the MultiClusterEngine\", \"jsonPath\": \".status.phase\", \"name\": \"Status\", \"type\": \"string\" }, { \"jsonPath\": \".metadata.creationTimestamp\", \"name\": \"Age\", \"type\": \"date\" } ], \"name\": \"v1alpha1\", \"schema\": { \"openAPIV3Schema\": { \"description\": \"MultiClusterEngine is the Schema for the multiclusterengines\\nAPI\", \"properties\": { \"apiVersion\": { \"description\": \"APIVersion defines the versioned schema of this representation\\nof an object. Servers should convert recognized schemas to the latest\\ninternal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\", \"type\": \"string\" }, \"kind\": { \"description\": \"Kind is a string value representing the REST resource this\\nobject represents. Servers may infer this from the endpoint the client\\nsubmits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\", \"type\": \"string\" }, \"metadata\": { \"type\": \"object\" }, \"spec\": { \"description\": \"MultiClusterEngineSpec defines the desired state of MultiClusterEngine\", \"properties\": { \"imagePullSecret\": { \"description\": \"Override pull secret for accessing MultiClusterEngine\\noperand and endpoint images\", \"type\": \"string\" }, \"nodeSelector\": { \"additionalProperties\": { \"type\": \"string\" }, \"description\": \"Set the nodeselectors\", \"type\": \"object\" }, \"targetNamespace\": { \"description\": \"Location where MCE resources will be placed\", \"type\": \"string\" }, \"tolerations\": { \"description\": \"Tolerations causes all components to tolerate any taints.\", \"items\": { \"description\": \"The pod this Toleration is attached to tolerates any\\ntaint that matches the triple <key,value,effect> using the matching\\noperator <operator>.\", \"properties\": { \"effect\": { \"description\": \"Effect indicates the taint effect to match. Empty\\nmeans match all taint effects. When specified, allowed values\\nare NoSchedule, PreferNoSchedule and NoExecute.\", \"type\": \"string\" }, \"key\": { \"description\": \"Key is the taint key that the toleration applies\\nto. Empty means match all taint keys. If the key is empty,\\noperator must be Exists; this combination means to match all\\nvalues and all keys.\", \"type\": \"string\" }, \"operator\": { \"description\": \"Operator represents a key's relationship to the\\nvalue. Valid operators are Exists and Equal. Defaults to Equal.\\nExists is equivalent to wildcard for value, so that a pod\\ncan tolerate all taints of a particular category.\", \"type\": \"string\" }, \"tolerationSeconds\": { \"description\": \"TolerationSeconds represents the period of time\\nthe toleration (which must be of effect NoExecute, otherwise\\nthis field is ignored) tolerates the taint. By default, it\\nis not set, which means tolerate the taint forever (do not\\nevict). Zero and negative values will be treated as 0 (evict\\nimmediately) by the system.\", \"format\": \"int64\", \"type\": \"integer\" }, \"value\": { \"description\": \"Value is the taint value the toleration matches\\nto. If the operator is Exists, the value should be empty,\\notherwise just a regular string.\", \"type\": \"string\" } }, \"type\": \"object\" }, \"type\": \"array\" } }, \"type\": \"object\" }, \"status\": { \"description\": \"MultiClusterEngineStatus defines the observed state of MultiClusterEngine\", \"properties\": { \"components\": { \"items\": { \"description\": \"ComponentCondition contains condition information for\\ntracked components\", \"properties\": { \"kind\": { \"description\": \"The resource kind this condition represents\", \"type\": \"string\" }, \"lastTransitionTime\": { \"description\": \"LastTransitionTime is the last time the condition\\nchanged from one status to another.\", \"format\": \"date-time\", \"type\": \"string\" }, \"message\": { \"description\": \"Message is a human-readable message indicating\\ndetails about the last status change.\", \"type\": \"string\" }, \"name\": { \"description\": \"The component name\", \"type\": \"string\" }, \"reason\": { \"description\": \"Reason is a (brief) reason for the condition's\\nlast status change.\", \"type\": \"string\" }, \"status\": { \"description\": \"Status is the status of the condition. 
One of True,\\nFalse, Unknown.\", \"type\": \"string\" }, \"type\": { \"description\": \"Type is the type of the cluster condition.\", \"type\": \"string\" } }, \"type\": \"object\" }, \"type\": \"array\" }, \"conditions\": { \"items\": { \"properties\": { \"lastTransitionTime\": { \"description\": \"LastTransitionTime is the last time the condition\\nchanged from one status to another.\", \"format\": \"date-time\", \"type\": \"string\" }, \"lastUpdateTime\": { \"description\": \"The last time this condition was updated.\", \"format\": \"date-time\", \"type\": \"string\" }, \"message\": { \"description\": \"Message is a human-readable message indicating\\ndetails about the last status change.\", \"type\": \"string\" }, \"reason\": { \"description\": \"Reason is a (brief) reason for the condition's\\nlast status change.\", \"type\": \"string\" }, \"status\": { \"description\": \"Status is the status of the condition. One of True,\\nFalse, Unknown.\", \"type\": \"string\" }, \"type\": { \"description\": \"Type is the type of the cluster condition.\", \"type\": \"string\" } }, \"type\": \"object\" }, \"type\": \"array\" }, \"phase\": { \"description\": \"Latest observed overall state\", \"type\": \"string\" } }, \"type\": \"object\" } }, \"type\": \"object\" } }, \"served\": true, \"storage\": true, \"subresources\": { \"status\": {} } } ] }, \"status\": { \"acceptedNames\": { \"kind\": \"\", \"plural\": \"\" }, \"conditions\": [], \"storedVersions\": [] } }", "GET /apis/multicluster.openshift.io/v1alpha1/multiclusterengines", "DELETE /apis/multicluster.openshift.io/v1alpha1/multiclusterengines/{name}", "GET /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placements", "POST /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placements", "{ \"apiVersion\" : \"cluster.open-cluster-management.io/v1beta1\", \"kind\" : \"Placement\", \"metadata\" : { \"name\" : \"placement1\", \"namespace\": \"ns1\" }, \"spec\": { \"predicates\": [ { \"requiredClusterSelector\": { \"labelSelector\": { \"matchLabels\": { \"vendor\": \"OpenShift\" } } } } ] }, \"status\" : { } }", "GET /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placements/{placement_name}", "DELETE /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placements/{placement_name}", "GET /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placementdecisions", "POST /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placementdecisions", "{ \"apiVersion\" : \"cluster.open-cluster-management.io/v1beta1\", \"kind\" : \"PlacementDecision\", \"metadata\" : { \"labels\" : { \"cluster.open-cluster-management.io/placement\" : \"placement1\" }, \"name\" : \"placement1-decision1\", \"namespace\": \"ns1\" }, \"status\" : { } }", "GET /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placementdecisions/{placementdecision_name}", "DELETE /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placementdecisions/{placementdecision_name}", "GET /config.open-cluster-management.io/v1alpha1/klusterletconfigs", "POST /config.open-cluster-management.io/v1alpha1/klusterletconfigs", "{ \"apiVersion\": \"apiextensions.k8s.io/v1\", \"kind\": \"CustomResourceDefinition\", \"metadata\": { \"annotations\": { \"controller-gen.kubebuilder.io/version\": \"v0.7.0\" }, \"creationTimestamp\": null, \"name\": \"klusterletconfigs.config.open-cluster-management.io\" }, \"spec\": { \"group\": \"config.open-cluster-management.io\", \"names\": { \"kind\": 
\"KlusterletConfig\", \"listKind\": \"KlusterletConfigList\", \"plural\": \"klusterletconfigs\", \"singular\": \"klusterletconfig\" }, \"preserveUnknownFields\": false, \"scope\": \"Cluster\", \"versions\": [ { \"name\": \"v1alpha1\", \"schema\": { \"openAPIV3Schema\": { \"description\": \"KlusterletConfig contains the configuration of a klusterlet including the upgrade strategy, config overrides, proxy configurations etc.\", \"properties\": { \"apiVersion\": { \"description\": \"APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\", \"type\": \"string\" }, \"kind\": { \"description\": \"Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\", \"type\": \"string\" }, \"metadata\": { \"type\": \"object\" }, \"spec\": { \"description\": \"Spec defines the desired state of KlusterletConfig\", \"properties\": { \"appliedManifestWorkEvictionGracePeriod\": { \"description\": \"AppliedManifestWorkEvictionGracePeriod is the eviction grace period the work agent will wait before evicting the AppliedManifestWorks, whose corresponding ManifestWorks are missing on the hub cluster, from the managed cluster. If not present, the default value of the work agent will be used. If its value is set to \\\"INFINITE\\\", it means the AppliedManifestWorks will never been evicted from the managed cluster.\", \"pattern\": \"^([0-9]+(s|m|h))+USD|^INFINITEUSD\", \"type\": \"string\" }, \"bootstrapKubeConfigs\": { \"description\": \"BootstrapKubeConfigSecrets is the list of secrets that reflects the Klusterlet.Spec.RegistrationConfiguration.BootstrapKubeConfigs.\", \"properties\": { \"localSecretsConfig\": { \"description\": \"LocalSecretsConfig include a list of secrets that contains the kubeconfigs for ordered bootstrap kubeconifigs. The secrets must be in the same namespace where the agent controller runs.\", \"properties\": { \"hubConnectionTimeoutSeconds\": { \"default\": 600, \"description\": \"HubConnectionTimeoutSeconds is used to set the timeout of connecting to the hub cluster. When agent loses the connection to the hub over the timeout seconds, the agent do a rebootstrap. By default is 10 mins.\", \"format\": \"int32\", \"minimum\": 180, \"type\": \"integer\" }, \"kubeConfigSecrets\": { \"description\": \"KubeConfigSecrets is a list of secret names. The secrets are in the same namespace where the agent controller runs.\", \"items\": { \"properties\": { \"name\": { \"description\": \"Name is the name of the secret.\", \"type\": \"string\" } }, \"type\": \"object\" }, \"type\": \"array\" } }, \"type\": \"object\" }, \"type\": { \"default\": \"None\", \"description\": \"Type specifies the type of priority bootstrap kubeconfigs. By default, it is set to None, representing no priority bootstrap kubeconfigs are set.\", \"enum\": [ \"None\", \"LocalSecrets\" ], \"type\": \"string\" } }, \"type\": \"object\" }, \"hubKubeAPIServerCABundle\": { \"description\": \"HubKubeAPIServerCABundle is the CA bundle to verify the server certificate of the hub kube API against. If not present, CA bundle will be determined with the logic below: 1). 
Use the certificate of the named certificate configured in APIServer/cluster if FQDN matches; 2). Otherwise use the CA certificates from kube-root-ca.crt ConfigMap in the cluster namespace; \\n Deprecated and maintained for backward compatibility, use HubKubeAPIServerConfig.ServerVarificationStrategy and HubKubeAPIServerConfig.TrustedCABundles instead\", \"format\": \"byte\", \"type\": \"string\" }, \"hubKubeAPIServerConfig\": { \"description\": \"HubKubeAPIServerConfig specifies the settings required for connecting to the hub Kube API server. If this field is present, the below deprecated fields will be ignored: - HubKubeAPIServerProxyConfig - HubKubeAPIServerURL - HubKubeAPIServerCABundle\", \"properties\": { \"proxyURL\": { \"description\": \"ProxyURL is the URL to the proxy to be used for all requests made by client If an HTTPS proxy server is configured, you may also need to add the necessary CA certificates to TrustedCABundles.\", \"type\": \"string\" }, \"serverVerificationStrategy\": { \"description\": \"ServerVerificationStrategy is the strategy used for verifying the server certification; The value could be \\\"UseSystemTruststore\\\", \\\"UseAutoDetectedCABundle\\\", \\\"UseCustomCABundles\\\", empty. \\n When this strategy is not set or value is empty; if there is only one klusterletConfig configured for a cluster, the strategy is eaual to \\\"UseAutoDetectedCABundle\\\", if there are more than one klusterletConfigs, the empty strategy will be overrided by other non-empty strategies.\", \"enum\": [ \"UseSystemTruststore\", \"UseAutoDetectedCABundle\", \"UseCustomCABundles\" ], \"type\": \"string\" }, \"trustedCABundles\": { \"description\": \"TrustedCABundles refers to a collection of user-provided CA bundles used for verifying the server certificate of the hub Kubernetes API If the ServerVerificationStrategy is set to \\\"UseSystemTruststore\\\", this field will be ignored. Otherwise, the CA certificates from the configured bundles will be appended to the klusterlet CA bundle.\", \"items\": { \"description\": \"CABundle is a user-provided CA bundle\", \"properties\": { \"caBundle\": { \"description\": \"CABundle refers to a ConfigMap with label \\\"import.open-cluster-management.io/ca-bundle\\\" containing the user-provided CA bundle The key of the CA data could be \\\"ca-bundle.crt\\\", \\\"ca.crt\\\", or \\\"tls.crt\\\".\", \"properties\": { \"name\": { \"description\": \"name is the metadata.name of the referenced config map\", \"type\": \"string\" }, \"namespace\": { \"description\": \"name is the metadata.namespace of the referenced config map\", \"type\": \"string\" } }, \"required\": [ \"name\", \"namespace\" ], \"type\": \"object\" }, \"name\": { \"description\": \"Name is the identifier used to reference the CA bundle; Do not use \\\"auto-detected\\\" as the name since it is the reserved name for the auto-detected CA bundle.\", \"type\": \"string\" } }, \"required\": [ \"caBundle\", \"name\" ], \"type\": \"object\" }, \"type\": \"array\", \"x-kubernetes-list-map-keys\": [ \"name\" ], \"x-kubernetes-list-type\": \"map\" }, \"url\": { \"description\": \"URL is the endpoint of the hub Kube API server. If not present, the .status.apiServerURL of Infrastructure/cluster will be used as the default value. e.g. 
`oc get infrastructure cluster -o jsonpath='{.status.apiServerURL}'`\", \"type\": \"string\" } }, \"type\": \"object\" }, \"hubKubeAPIServerProxyConfig\": { \"description\": \"HubKubeAPIServerProxyConfig holds proxy settings for connections between klusterlet/add-on agents on the managed cluster and the kube-apiserver on the hub cluster. Empty means no proxy settings is available. \\n Deprecated and maintained for backward compatibility, use HubKubeAPIServerConfig.ProxyURL instead\", \"properties\": { \"caBundle\": { \"description\": \"CABundle is a CA certificate bundle to verify the proxy server. It will be ignored if only HTTPProxy is set; And it is required when HTTPSProxy is set and self signed CA certificate is used by the proxy server.\", \"format\": \"byte\", \"type\": \"string\" }, \"httpProxy\": { \"description\": \"HTTPProxy is the URL of the proxy for HTTP requests\", \"type\": \"string\" }, \"httpsProxy\": { \"description\": \"HTTPSProxy is the URL of the proxy for HTTPS requests HTTPSProxy will be chosen if both HTTPProxy and HTTPSProxy are set.\", \"type\": \"string\" } }, \"type\": \"object\" }, \"hubKubeAPIServerURL\": { \"description\": \"HubKubeAPIServerURL is the URL of the hub Kube API server. If not present, the .status.apiServerURL of Infrastructure/cluster will be used as the default value. e.g. `oc get infrastructure cluster -o jsonpath='{.status.apiServerURL}'` \\n Deprecated and maintained for backward compatibility, use HubKubeAPIServerConfig.URL instead\", \"type\": \"string\" }, \"installMode\": { \"description\": \"InstallMode is the mode to install the klusterlet\", \"properties\": { \"noOperator\": { \"description\": \"NoOperator is the setting of klusterlet installation when install type is noOperator.\", \"properties\": { \"postfix\": { \"description\": \"Postfix is the postfix of the klusterlet name. The name of the klusterlet is \\\"klusterlet\\\" if it is not set, and \\\"klusterlet-{Postfix}\\\". The install namespace is \\\"open-cluster-management-agent\\\" if it is not set, and \\\"open-cluster-management-{Postfix}\\\".\", \"maxLength\": 33, \"pattern\": \"^[-a-z0-9]*[a-z0-9]USD\", \"type\": \"string\" } }, \"type\": \"object\" }, \"type\": { \"default\": \"default\", \"description\": \"InstallModeType is the type of install mode.\", \"enum\": [ \"default\", \"noOperator\" ], \"type\": \"string\" } }, \"type\": \"object\" }, \"nodePlacement\": { \"description\": \"NodePlacement enables explicit control over the scheduling of the agent components. If the placement is nil, the placement is not specified, it will be omitted. If the placement is an empty object, the placement will match all nodes and tolerate nothing.\", \"properties\": { \"nodeSelector\": { \"additionalProperties\": { \"type\": \"string\" }, \"description\": \"NodeSelector defines which Nodes the Pods are scheduled on. The default is an empty list.\", \"type\": \"object\" }, \"tolerations\": { \"description\": \"Tolerations are attached by pods to tolerate any taint that matches the triple <key,value,effect> using the matching operator <operator>. The default is an empty list.\", \"items\": { \"description\": \"The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>.\", \"properties\": { \"effect\": { \"description\": \"Effect indicates the taint effect to match. Empty means match all taint effects. 
When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.\", \"type\": \"string\" }, \"key\": { \"description\": \"Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.\", \"type\": \"string\" }, \"operator\": { \"description\": \"Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.\", \"type\": \"string\" }, \"tolerationSeconds\": { \"description\": \"TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.\", \"format\": \"int64\", \"type\": \"integer\" }, \"value\": { \"description\": \"Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.\", \"type\": \"string\" } }, \"type\": \"object\" }, \"type\": \"array\" } }, \"type\": \"object\" }, \"pullSecret\": { \"description\": \"PullSecret is the name of image pull secret.\", \"properties\": { \"apiVersion\": { \"description\": \"API version of the referent.\", \"type\": \"string\" }, \"fieldPath\": { \"description\": \"If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: \\\"spec.containers{name}\\\" (where \\\"name\\\" refers to the name of the container that triggered the event) or if no container name is specified \\\"spec.containers[2]\\\" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.\", \"type\": \"string\" }, \"kind\": { \"description\": \"Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\", \"type\": \"string\" }, \"name\": { \"description\": \"Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\", \"type\": \"string\" }, \"namespace\": { \"description\": \"Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/\", \"type\": \"string\" }, \"resourceVersion\": { \"description\": \"Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\", \"type\": \"string\" }, \"uid\": { \"description\": \"UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids\", \"type\": \"string\" } }, \"type\": \"object\", \"x-kubernetes-map-type\": \"atomic\" }, \"registries\": { \"description\": \"Registries includes the mirror and source registries. The source registry will be replaced by the Mirror.\", \"items\": { \"properties\": { \"mirror\": { \"description\": \"Mirror is the mirrored registry of the Source. 
Will be ignored if Mirror is empty.\", \"type\": \"string\" }, \"source\": { \"description\": \"Source is the source registry. All image registries will be replaced by Mirror if Source is empty.\", \"type\": \"string\" } }, \"required\": [ \"mirror\" ], \"type\": \"object\" }, \"type\": \"array\" } }, \"type\": \"object\" }, \"status\": { \"description\": \"Status defines the observed state of KlusterletConfig\", \"type\": \"object\" } }, \"type\": \"object\" } }, \"served\": true, \"storage\": true, \"subresources\": { \"status\": {} } } ] }, \"status\": { \"acceptedNames\": { \"kind\": \"\", \"plural\": \"\" }, \"conditions\": [], \"storedVersions\": [] } }", "GET /config.open-cluster-management.io/v1alpha1/klusterletconfigs/{klusterletconfig_name}", "DELETE /config.open-cluster-management.io/v1alpha1/klusterletconfigs/{klusterletconfig_name}", "<your-directory>/cluster-scoped-resources/gather-managed.log>", "adm must-gather --image=registry.redhat.io/multicluster-engine/must-gather-rhel9:v2.7 --dest-dir=<directory>", "REGISTRY=registry.example.com:5000 IMAGE=USDREGISTRY/multicluster-engine/must-gather-rhel9@sha256:ff9f37eb400dc1f7d07a9b6f2da9064992934b69847d17f59e385783c071b9d8> adm must-gather --image=USDIMAGE --dest-dir=./data", "This host is pending user action. Host timed out when pulling ignition. Check the host console... Rebooting", "info: networking config is defined in the real root info: will not attempt to propagate initramfs networking", "\"bmac.agent-install.openshift.io/installer-args\": \"[\\\"--append-karg\\\", \\\"coreos.force_persist_ip\\\"]\"", "2024-02-22T09:56:19-05:00 ERROR HostedCluster deletion failed {\"namespace\": \"clusters\", \"name\": \"hosted-0\", \"error\": \"context deadline exceeded\"} 2024-02-22T09:56:19-05:00 ERROR Failed to destroy cluster {\"error\": \"context deadline exceeded\"}", "get machine -n <hosted_cluster_namespace>", "NAMESPACE NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION clusters-hosted-0 hosted-0-9gg8b hosted-0-nhdbp Deleting 10h 4.14.0-rc.8", "edit machines -n <hosted_cluster_namespace>", "get agentmachine -n <hosted_cluster_namespace>", "hcp destroy cluster agent --name <cluster_name>", "reason: Unschedulable message: '0/6 nodes are available: 3 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.'", "#!/bin/bash MCE_NAMESPACE=<namespace> delete multiclusterengine --all delete apiservice v1.admission.cluster.open-cluster-management.io v1.admission.work.open-cluster-management.io delete crd discoveredclusters.discovery.open-cluster-management.io discoveryconfigs.discovery.open-cluster-management.io delete mutatingwebhookconfiguration ocm-mutating-webhook managedclustermutators.admission.cluster.open-cluster-management.io delete validatingwebhookconfiguration ocm-validating-webhook delete ns USDMCE_NAMESPACE", "-n multicluster-engine get pods -l app=managedcluster-import-controller-v2", "-n multicluster-engine logs -l app=managedcluster-import-controller-v2 --tail=-1", "-n <managed_cluster_name> get secrets <managed_cluster_name>-import", "-n multicluster-engine logs -l app=managedcluster-import-controller-v2 --tail=-1 | grep importconfig-controller", "get managedcluster <managed_cluster_name> -o=jsonpath='{range .status.conditions[*]}{.type}{\"\\t\"}{.status}{\"\\t\"}{.message}{\"\\n\"}{end}' | grep ManagedClusterImportSucceeded", "-n multicluster-engine logs -l app=managedcluster-import-controller-v2 -f", "cluster_name=<your-managed-cluster-name>", "kubeconfig_secret_name=USD(oc 
-n USD{cluster_name} get clusterdeployments USD{cluster_name} -ojsonpath='{.spec.clusterMetadata.adminKubeconfigSecretRef.name}')", "-n USD{cluster_name} get secret USD{kubeconfig_secret_name} -ojsonpath={.data.kubeconfig} | base64 -d > kubeconfig.old", "export KUBECONFIG=kubeconfig.old", "get ns", "cluster_name=<managed_cluster_name> kubeconfig_file=<path_to_kubeconfig>", "kubeconfig=USD(cat USD{kubeconfig_file} | base64 -w0)", "kubeconfig=USD(cat USD{kubeconfig_file} | base64)", "kubeconfig_patch=\"[\\{\\\"op\\\":\\\"replace\\\", \\\"path\\\":\\\"/data/kubeconfig\\\", \\\"value\\\":\\\"USD{kubeconfig}\\\"}, \\{\\\"op\\\":\\\"replace\\\", \\\"path\\\":\\\"/data/raw-kubeconfig\\\", \\\"value\\\":\\\"USD{kubeconfig}\\\"}]\"", "kubeconfig_secret_name=USD(oc -n USD{cluster_name} get clusterdeployments USD{cluster_name} -ojsonpath='{.spec.clusterMetadata.adminKubeconfigSecretRef.name}')", "-n USD{cluster_name} patch secrets USD{kubeconfig_secret_name} --type='json' -p=\"USD{kubeconfig_patch}\"", "get pod -n open-cluster-management-agent | grep klusterlet-registration-agent", "logs <registration_agent_pod> -n open-cluster-management-agent", "get infrastructure cluster -o yaml | grep apiServerURL", "E0917 03:04:05.874759 1 manifestwork_controller.go:179] Reconcile work test-1-klusterlet-addon-workmgr fails with err: Failed to update work status with err Get \"https://api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/namespaces/test-1/manifestworks/test-1-klusterlet-addon-workmgr\": x509: certificate signed by unknown authority E0917 03:04:05.874887 1 base_controller.go:231] \"ManifestWorkAgent\" controller failed to sync \"test-1-klusterlet-addon-workmgr\", err: Failed to update work status with err Get \"api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/namespaces/test-1/manifestworks/test-1-klusterlet-addon-workmgr\": x509: certificate signed by unknown authority E0917 03:04:37.245859 1 reflector.go:127] k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.ManifestWork: failed to list *v1.ManifestWork: Get \"api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/namespaces/test-1/manifestworks?resourceVersion=607424\": x509: certificate signed by unknown authority", "I0917 02:27:41.525026 1 event.go:282] Event(v1.ObjectReference{Kind:\"Namespace\", Namespace:\"open-cluster-management-agent\", Name:\"open-cluster-management-agent\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Normal' reason: 'ManagedClusterAvailableConditionUpdated' update managed cluster \"test-1\" available condition to \"True\", due to \"Managed cluster is available\" E0917 02:58:26.315984 1 reflector.go:127] k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1beta1.CertificateSigningRequest: Get \"https://api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/managedclusters?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dtest-1&resourceVersion=607408&timeout=9m33s&timeoutSeconds=573&watch=true\"\": x509: certificate signed by unknown authority E0917 02:58:26.598343 1 reflector.go:127] k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.ManagedCluster: Get \"https://api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/managedclusters?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dtest-1&resourceVersion=607408&timeout=9m33s&timeoutSeconds=573&watch=true\": x509: certificate signed by unknown authority E0917 02:58:27.613963 1 reflector.go:127] 
k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.ManagedCluster: failed to list *v1.ManagedCluster: Get \"https://api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/managedclusters?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dtest-1&resourceVersion=607408&timeout=9m33s&timeoutSeconds=573&watch=true\"\": x509: certificate signed by unknown authority", "delete secret -n <cluster_name> <cluster_name>-import", "delete secret -n <cluster_name> <cluster_name>-import", "get secret -n <cluster_name> <cluster_name>-import -ojsonpath='{.data.import\\.yaml}' | base64 --decode > import.yaml", "apply -f import.yaml", "edit managedcluster <cluster-name>", "time=\"2020-08-07T15:27:55Z\" level=error msg=\"Error: error setting up new vSphere SOAP client: Post https://147.1.1.1/sdk: x509: cannot validate certificate for xx.xx.xx.xx because it doesn't contain any IP SANs\" time=\"2020-08-07T15:27:55Z\" level=error", "Error: error setting up new vSphere SOAP client: Post https://vspherehost.com/sdk: x509: certificate signed by unknown authority\"", "x509: certificate has expired or is not yet valid", "time=\"2020-08-07T19:41:58Z\" level=debug msg=\"vsphere_tag_category.category: Creating...\" time=\"2020-08-07T19:41:58Z\" level=error time=\"2020-08-07T19:41:58Z\" level=error msg=\"Error: could not create category: POST https://vspherehost.com/rest/com/vmware/cis/tagging/category: 403 Forbidden\" time=\"2020-08-07T19:41:58Z\" level=error time=\"2020-08-07T19:41:58Z\" level=error msg=\" on ../tmp/openshift-install-436877649/main.tf line 54, in resource \\\"vsphere_tag_category\\\" \\\"category\\\":\" time=\"2020-08-07T19:41:58Z\" level=error msg=\" 54: resource \\\"vsphere_tag_category\\\" \\\"category\\\" {\"", "failed to fetch Master Machines: failed to load asset \\\\\\\"Install Config\\\\\\\": invalid \\\\\\\"install-config.yaml\\\\\\\" file: platform.vsphere.dnsVIP: Invalid value: \\\\\\\"\\\\\\\": \\\\\\\"\\\\\\\" is not a valid IP", "time=\"2020-08-11T14:31:38-04:00\" level=debug msg=\"vsphereprivate_import_ova.import: Creating...\" time=\"2020-08-11T14:31:39-04:00\" level=error time=\"2020-08-11T14:31:39-04:00\" level=error msg=\"Error: rpc error: code = Unavailable desc = transport is closing\" time=\"2020-08-11T14:31:39-04:00\" level=error time=\"2020-08-11T14:31:39-04:00\" level=error time=\"2020-08-11T14:31:39-04:00\" level=fatal msg=\"failed to fetch Cluster: failed to generate asset \\\"Cluster\\\": failed to create cluster: failed to apply Terraform: failed to complete the change\"", "ERROR ERROR Error: error reconfiguring virtual machine: error processing disk changes post-clone: disk.0: ServerFaultCode: NoPermission: RESOURCE (vm-71:2000), ACTION (queryAssociatedProfile): RESOURCE (vm-71), ACTION (PolicyIDByVirtualDisk)", "get pod -n <new_cluster_name>", "logs <new_cluster_name_provision_pod_name> -n <new_cluster_name> -c hive", "describe clusterdeployments -n <new_cluster_name>", "No subnets provided for zones", "get klusterlets klusterlet -oyaml", "api-resources --verbs=list --namespaced -o name | grep -E '^secrets|^serviceaccounts|^managedclusteraddons|^roles|^rolebindings|^manifestworks|^leases|^managedclusterinfo|^appliedmanifestworks'|^clusteroauths' | xargs -n 1 oc get --show-kind --ignore-not-found -n <cluster_name>", "edit <resource_kind> <resource_name> -n <namespace>", "delete ns <cluster-name>", "delete secret auto-import-secret -n <cluster-namespace>", "describe placement <placement-name>", "Name: demo-placement Namespace: default 
Labels: <none> Annotations: <none> API Version: cluster.open-cluster-management.io/v1beta1 Kind: Placement Status: Conditions: Last Transition Time: 2022-09-30T07:39:45Z Message: Placement configurations check pass Reason: Succeedconfigured Status: False Type: PlacementMisconfigured Last Transition Time: 2022-09-30T07:39:45Z Message: No valid ManagedClusterSetBindings found in placement namespace Reason: NoManagedClusterSetBindings Status: False Type: PlacementSatisfied Number Of Selected Clusters: 0", "Name: demo-placement Namespace: default Labels: <none> Annotations: <none> API Version: cluster.open-cluster-management.io/v1beta1 Kind: Placement Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal DecisionCreate 2m10s placementController Decision demo-placement-decision-1 is created with placement demo-placement in namespace default Normal DecisionUpdate 2m10s placementController Decision demo-placement-decision-1 is updated with placement demo-placement in namespace default Normal ScoreUpdate 2m10s placementController cluster1:0 cluster2:100 cluster3:200 Normal DecisionUpdate 3s placementController Decision demo-placement-decision-1 is updated with placement demo-placement in namespace default Normal ScoreUpdate 3s placementController cluster1:200 cluster2:145 cluster3:189 cluster4:200", "ProvisioningError 51s metal3-baremetal-controller Image provisioning failed: Deploy step deploy.deploy failed with BadRequestError: HTTP POST https://<bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia returned code 400. Base.1.8.GeneralError: A general error has occurred. See ExtendedInfo for more information Extended information: [ {\"Message\": \"Unable to mount remote share https://<ironic_address>/redfish/boot-<uuid>.iso.\", 'MessageArgs': [\"https://<ironic_address>/redfish/boot-<uuid>.iso\"], \"[email protected]\": 1, \"MessageId\": \"IDRAC.2.5.RAC0720\", \"RelatedProperties\": [\"#/Image\"], \"[email protected]\": 1, \"Resolution\": \"Retry the operation.\", \"Severity\": \"Informational\"} ]", "get imagecontentsourcepolicy -o json | jq -r '.items[].spec.repositoryDigestMirrors[0].mirrors[0]'", "get clusterversion version -ojsonpath='{.status.desired.image}'", "image extract --file /release-manifests/0000_50_installer_coreos-bootimages.yaml <payload_image> --confirm", "cat 0000_50_installer_coreos-bootimages.yaml | yq -r .data.stream | jq -r '.architectures.x86_64.images.kubevirt.\"digest-ref\"'", "image mirror <rhcos_image> <internal_registry>", "apiVersion: config.openshift.io/v1 kind: ImageDigestMirrorSet metadata: name: rhcos-boot-kubevirt spec: repositoryDigestMirrors: - mirrors: - <rhcos_image_no_digest> 1 source: virthost.ostest.test.metalkube.org:5000/localimages/ocp-v4.0-art-dev 2", "apply -f rhcos-boot-kubevirt.yaml", "E0809 18:45:29.450874 1 reflector.go:147] k8s.io/[email protected]/tools/cache/reflector.go:229: Failed to watch *v1.CertificateSigningRequest: failed to list *v1.CertificateSigningRequest: Get \"https://api.xxx.openshiftapps.com:443/apis/certificates.k8s.io/v1/certificatesigningrequests?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority", "apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: global spec: hubKubeAPIServerConfig: serverVerificationStrategy: UseSystemTruststore", "apply -f <filename>", "get secret <cluster_name>-import -n <cluster_name> -o jsonpath={.data.import\\.yaml} | base64 --decode 
> <cluster_name>-import.yaml", "apply -f <cluster_name>-import.yaml", "get clusterversion version -o jsonpath='{.status.availableUpdates[*].version}'", "-n <cluster_name> get managedclusterinfo <cluster_name> -o jsonpath='{.status.distributionInfo.ocp.availableUpdates[*]}'", "-n <cluster_name> get ClusterCurator <cluster_name> -o yaml", "-n <cluster_name> delete ClusterCurator <cluster_name>", "-n open-cluster-management-agent-addon logs klusterlet-addon-workmgr-<your_pod_name>" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/clusters/cluster_mce_overview
Chapter 1. About model serving
Chapter 1. About model serving When you serve a model, you upload a trained model into Red Hat OpenShift AI for querying, which allows you to integrate your trained models into intelligent applications. You can upload a model to an S3-compatible object storage, persistent volume claim, or Open Container Initiative (OCI) image. You can then access and train the model from your project workbench. After training the model, you can serve or deploy the model using a model-serving platform. Serving or deploying the model makes the model available as a service, or model runtime server, that you can access using an API. You can then access the inference endpoints for the deployed model from the dashboard and see predictions based on data inputs that you provide through API calls. Querying the model through the API is also called model inferencing. You can serve models on one of the following model-serving platforms: Single-model serving platform Multi-model serving platform NVIDIA NIM model serving platform The model-serving platform that you choose depends on your business needs: If you want to deploy each model on its own runtime server, or want to use a serverless deployment, select the single-model serving platform . The single-model serving platform is recommended for production use. If you want to deploy multiple models with only one runtime server, select the multi-model serving platform . This option is best if you are deploying more than 1,000 small and medium models and want to reduce resource consumption. If you want to use NVIDIA Inference Microservices (NIMs) to deploy a model, select the NVIDIA NIM model serving platform . 1.1. Single-model serving platform You can deploy each model from a dedicated model server on the single-model serving platform. Deploying models from a dedicated model server can help you deploy, monitor, scale, and maintain models that require increased resources. This model serving platform is ideal for serving large models. The single-model serving platform is based on the KServe component. The single-model serving platform is helpful for use cases such as: Large language models (LLMs) Generative AI For more information about setting up the single-model serving platform, see Installing the single-model serving platform . 1.2. Multi-model serving platform You can deploy multiple models from the same model server on the multi-model serving platform. Each of the deployed models shares the server resources. Deploying multiple models from the same model server can be advantageous on OpenShift clusters that have finite compute resources or pods. This model serving platform is ideal for serving small and medium models in large quantities. The multi-model serving platform is based on the ModelMesh component. For more information about setting up the multi-model serving platform, see Installing the multi-model serving platform . 1.3. NVIDIA NIM model serving platform You can deploy models using NVIDIA Inference Microservices (NIM) on the NVIDIA NIM model serving platform. NVIDIA NIM, part of NVIDIA AI Enterprise, is a set of microservices designed for secure, reliable deployment of high-performance AI model inferencing across clouds, data centers, and workstations. NVIDIA NIM inference services are helpful for use cases such as: Using GPU-accelerated containers to run inference on models optimized by NVIDIA Deploying generative AI for virtual screening, content generation, and avatar creation The NVIDIA NIM model serving platform is based on the single-model serving platform.
To use the NVIDIA NIM model serving platform, you must first install the single-model serving platform. For more information, see Installing the single-model serving platform .
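Because the single-model serving platform is based on KServe, each deployment ultimately corresponds to a KServe InferenceService resource. The following is only a minimal sketch of such a resource, not the documented OpenShift AI procedure (which is normally driven from the dashboard); the name, namespace, runtime, and storage URI are hypothetical placeholders.

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: example-model                      # hypothetical model name
  namespace: my-data-science-project       # hypothetical project namespace
spec:
  predictor:
    model:
      modelFormat:
        name: onnx                         # format of the uploaded model
      runtime: example-serving-runtime     # hypothetical ServingRuntime to use
      storageUri: s3://example-bucket/models/example-model   # hypothetical S3 location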
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/serving_models/about-model-serving_about-model-serving
Chapter 14. Web Servers
Chapter 14. Web Servers A web server is a network service that serves content to a client over the web. This typically means web pages, but any other documents can be served as well. Web servers are also known as HTTP servers, as they use the Hypertext Transfer Protocol ( HTTP ). The web servers available in Red Hat Enterprise Linux 7 are: Apache HTTP Server nginx Important Note that the nginx web server is available only as a Software Collection for Red Hat Enterprise Linux 7. See the Red Hat Software Collections Release Notes for information regarding getting access to nginx, usage of Software Collections, and other information. 14.1. The Apache HTTP Server This section focuses on the Apache HTTP Server 2.4 , httpd , an open source web server developed by the Apache Software Foundation . If you are upgrading from a previous release of Red Hat Enterprise Linux, you will need to update the httpd service configuration accordingly. This section reviews some of the newly added features, outlines important changes between Apache HTTP Server 2.4 and version 2.2, and guides you through the update of older configuration files. 14.1.1. Notable Changes The Apache HTTP Server in Red Hat Enterprise Linux 7 has the following changes compared to Red Hat Enterprise Linux 6: httpd Service Control With the migration away from SysV init scripts, server administrators should switch to using the apachectl and systemctl commands to control the service, in place of the service command. The following examples are specific to the httpd service. The command: is replaced by The systemd unit file for httpd has different behavior from the init script as follows: A graceful restart is used by default when the service is reloaded. A graceful stop is used by default when the service is stopped. The command: is replaced by Private /tmp To enhance system security, the systemd unit file runs the httpd daemon using a private /tmp directory, separate from the system /tmp directory. Configuration Layout Configuration files which load modules are now placed in the /etc/httpd/conf.modules.d/ directory. Packages that provide additional loadable modules for httpd , such as php , will place a file in this directory. An Include directive before the main section of the /etc/httpd/conf/httpd.conf file is used to include files within the /etc/httpd/conf.modules.d/ directory. This means any configuration files within conf.modules.d/ are processed before the main body of httpd.conf . An IncludeOptional directive for files within the /etc/httpd/conf.d/ directory is placed at the end of the httpd.conf file. This means the files within /etc/httpd/conf.d/ are now processed after the main body of httpd.conf . Some additional configuration files are provided by the httpd package itself: /etc/httpd/conf.d/autoindex.conf - This configures mod_autoindex directory indexing. /etc/httpd/conf.d/userdir.conf - This configures access to user directories, for example http://example.com/~username/ ; such access is disabled by default for security reasons. /etc/httpd/conf.d/welcome.conf - As in previous releases, this configures the welcome page displayed for http://localhost/ when no content is present. Default Configuration A minimal httpd.conf file is now provided by default. Many common configuration settings, such as Timeout or KeepAlive , are no longer explicitly configured in the default configuration; hard-coded settings will be used instead, by default. The hard-coded default settings for all configuration directives are specified in the manual.
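If you need a value other than a hard-coded default, set the directive explicitly in /etc/httpd/conf/httpd.conf or in a file under /etc/httpd/conf.d/ . For example (the values below are illustrative only, not recommendations):

Timeout 60
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5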
See the section called "Installable Documentation" for more information. Incompatible Syntax Changes If migrating an existing configuration from httpd 2.2 to httpd 2.4 , a number of backwards-incompatible changes to the httpd configuration syntax were made which will require changes. See the following Apache document for more information on upgrading http://httpd.apache.org/docs/2.4/upgrading.html Processing Model In releases of Red Hat Enterprise Linux, different multi-processing models ( MPM ) were made available as different httpd binaries: the forked model, "prefork", as /usr/sbin/httpd , and the thread-based model "worker" as /usr/sbin/httpd.worker . In Red Hat Enterprise Linux 7, only a single httpd binary is used, and three MPMs are available as loadable modules: worker, prefork (default), and event. Edit the configuration file /etc/httpd/conf.modules.d/00-mpm.conf as required, by adding and removing the comment character # so that only one of the three MPM modules is loaded. Packaging Changes The LDAP authentication and authorization modules are now provided in a separate sub-package, mod_ldap . The new module mod_session and associated helper modules are provided in a new sub-package, mod_session . The new modules mod_proxy_html and mod_xml2enc are provided in a new sub-package, mod_proxy_html . These packages are all in the Optional channel. Note Before subscribing to the Optional and Supplementary channels see the Scope of Coverage Details . If you decide to install packages from these channels, follow the steps documented in the article called How to access Optional and Supplementary channels, and -devel packages using Red Hat Subscription Manager (RHSM)? on the Red Hat Customer Portal. Packaging Filesystem Layout The /var/cache/mod_proxy/ directory is no longer provided; instead, the /var/cache/httpd/ directory is packaged with a proxy and ssl subdirectory. Packaged content provided with httpd has been moved from /var/www/ to /usr/share/httpd/ : /usr/share/httpd/icons/ - The directory containing a set of icons used with directory indices, previously contained in /var/www/icons/ , has moved to /usr/share/httpd/icons/ . Available at http://localhost/icons/ in the default configuration; the location and the availability of the icons is configurable in the /etc/httpd/conf.d/autoindex.conf file. /usr/share/httpd/manual/ - The /var/www/manual/ has moved to /usr/share/httpd/manual/ . This directory, contained in the httpd-manual package, contains the HTML version of the manual for httpd . Available at http://localhost/manual/ if the package is installed, the location and the availability of the manual is configurable in the /etc/httpd/conf.d/manual.conf file. /usr/share/httpd/error/ - The /var/www/error/ has moved to /usr/share/httpd/error/ . Custom multi-language HTTP error pages. Not configured by default, the example configuration file is provided at /usr/share/doc/httpd- VERSION /httpd-multilang-errordoc.conf . Authentication, Authorization and Access Control The configuration directives used to control authentication, authorization and access control have changed significantly. Existing configuration files using the Order , Deny and Allow directives should be adapted to use the new Require syntax. 
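For example, an access-control block written for httpd 2.2 maps to the 2.4 Require syntax roughly as follows (a typical mapping; adapt it to your own policy):

# httpd 2.2 configuration
<Directory "/var/www/html">
    Order allow,deny
    Allow from all
</Directory>

# equivalent httpd 2.4 configuration
<Directory "/var/www/html">
    Require all granted
</Directory>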
See the following Apache document for more information: http://httpd.apache.org/docs/2.4/howto/auth.html suexec To improve system security, the suexec binary is no longer installed as a setuid root binary; instead, it has file system capability bits set which allow a more restrictive set of permissions. In conjunction with this change, the suexec binary no longer uses the /var/log/httpd/suexec.log logfile. Instead, log messages are sent to syslog ; by default these will appear in the /var/log/secure log file. Module Interface Third-party binary modules built against httpd 2.2 are not compatible with httpd 2.4 due to changes to the httpd module interface. Such modules will need to be adjusted as necessary for the httpd 2.4 module interface, and then rebuilt. A detailed list of the API changes in version 2.4 is available here: http://httpd.apache.org/docs/2.4/developer/new_api_2_4.html . The apxs binary used to build modules from source has moved from /usr/sbin/apxs to /usr/bin/apxs . Removed modules List of httpd modules removed in Red Hat Enterprise Linux 7: mod_auth_mysql, mod_auth_pgsql httpd 2.4 provides SQL database authentication support internally in the mod_authn_dbd module. mod_perl mod_perl is not officially supported with httpd 2.4 by upstream. mod_authz_ldap httpd 2.4 provides LDAP support in the sub-package mod_ldap using mod_authnz_ldap . 14.1.2. Updating the Configuration To update the configuration files from the Apache HTTP Server version 2.2, take the following steps: Make sure all module names are correct, since they may have changed. Adjust the LoadModule directive for each module that has been renamed. Recompile all third-party modules before attempting to load them. This typically means authentication and authorization modules. If you use the mod_userdir module, make sure the UserDir directive indicating a directory name (typically public_html ) is provided. If you use the Apache HTTP Secure Server, see Section 14.1.8, "Enabling the mod_ssl Module" for important information on enabling the Secure Sockets Layer (SSL) protocol. Note that you can check the configuration for possible errors by using the following command: For more information on upgrading the Apache HTTP Server configuration from version 2.2 to 2.4, see http://httpd.apache.org/docs/2.4/upgrading.html . 14.1.3. Running the httpd Service This section describes how to start, stop, restart, and check the current status of the Apache HTTP Server. To be able to use the httpd service, make sure you have the httpd package installed. You can do so by using the following command: For more information on the concept of targets and how to manage system services in Red Hat Enterprise Linux in general, see Chapter 10, Managing Services with systemd . 14.1.3.1. Starting the Service To run the httpd service, type the following at a shell prompt as root : If you want the service to start automatically at boot time, use the following command: Note If running the Apache HTTP Server as a secure server, a password may be required after the machine boots if using an encrypted private SSL key. 14.1.3.2. Stopping the Service To stop the running httpd service, type the following at a shell prompt as root : To prevent the service from starting automatically at boot time, type: 14.1.3.3. Restarting the Service There are three different ways to restart a running httpd service: To restart the service completely, enter the following command as root : This stops the running httpd service and immediately starts it again.
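In a stock Red Hat Enterprise Linux 7 installation, the full restart is performed as root with:

systemctl restart httpd.service

(The earlier start, stop, enable, and disable steps follow the same pattern, for example systemctl start httpd.service and systemctl enable httpd.service .)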
Use this command after installing or removing a dynamically loaded module such as PHP. To only reload the configuration, as root , type: This causes the running httpd service to reload its configuration file. Any requests currently being processed will be interrupted, which may cause a client browser to display an error message or render a partial page. To reload the configuration without affecting active requests, enter the following command as root : This causes the running httpd service to reload its configuration file. Any requests currently being processed will continue to use the old configuration. For more information on how to manage system services in Red Hat Enterprise Linux 7, see Chapter 10, Managing Services with systemd . 14.1.3.4. Verifying the Service Status To verify that the httpd service is running, type the following at a shell prompt: 14.1.4. Editing the Configuration Files When the httpd service is started, by default, it reads the configuration from locations that are listed in Table 14.1, "The httpd service configuration files" . Table 14.1. The httpd service configuration files Path Description /etc/httpd/conf/httpd.conf The main configuration file. /etc/httpd/conf.d/ An auxiliary directory for configuration files that are included in the main configuration file. Although the default configuration should be suitable for most situations, it is a good idea to become at least familiar with some of the more important configuration options. Note that for any changes to take effect, the web server has to be restarted first. See Section 14.1.3.3, "Restarting the Service" for more information on how to restart the httpd service. To check the configuration for possible errors, type the following at a shell prompt: To make the recovery from mistakes easier, it is recommended that you make a copy of the original file before editing it. 14.1.5. Working with Modules Being a modular application, the httpd service is distributed along with a number of Dynamic Shared Objects ( DSO s), which can be dynamically loaded or unloaded at runtime as necessary. On Red Hat Enterprise Linux 7, these modules are located in /usr/lib64/httpd/modules/ . 14.1.5.1. Loading a Module To load a particular DSO module, use the LoadModule directive. Note that modules provided by a separate package often have their own configuration file in the /etc/httpd/conf.d/ directory. Example 14.1. Loading the mod_ssl DSO Once you are finished, restart the web server to reload the configuration. See Section 14.1.3.3, "Restarting the Service" for more information on how to restart the httpd service. 14.1.5.2. Writing a Module If you intend to create a new DSO module, make sure you have the httpd-devel package installed. To do so, enter the following command as root : This package contains the include files, the header files, and the APache eXtenSion ( apxs ) utility required to compile a module. Once written, you can build the module with the following command: If the build was successful, you should be able to load the module the same way as any other module that is distributed with the Apache HTTP Server. 14.1.6. Setting Up Virtual Hosts The Apache HTTP Server's built in virtual hosting allows the server to provide different information based on which IP address, host name, or port is being requested. 
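As an illustration only, a minimal name-based virtual host looks similar to the following; the host names, paths, and log locations are placeholders rather than the values shipped in the example file:

<VirtualHost *:80>
    ServerAdmin webmaster@example.com
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot "/var/www/html/example.com"
    ErrorLog "/var/log/httpd/example.com-error_log"
    CustomLog "/var/log/httpd/example.com-access_log" combined
</VirtualHost>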
To create a name-based virtual host, copy the example configuration file /usr/share/doc/httpd- VERSION /httpd-vhosts.conf into the /etc/httpd/conf.d/ directory, and replace the @@Port@@ and @@ServerRoot@@ placeholder values. Customize the options according to your requirements as shown in Example 14.2, "Example virtual host configuration" . Example 14.2. Example virtual host configuration Note that ServerName must be a valid DNS name assigned to the machine. The <VirtualHost> container is highly customizable, and accepts most of the directives available within the main server configuration. Directives that are not supported within this container include User and Group , which were replaced by SuexecUserGroup . Note If you configure a virtual host to listen on a non-default port, make sure you update the Listen directive in the global settings section of the /etc/httpd/conf/httpd.conf file accordingly. To activate a newly created virtual host, the web server has to be restarted first. See Section 14.1.3.3, "Restarting the Service" for more information on how to restart the httpd service. 14.1.7. Setting Up an SSL Server Secure Sockets Layer ( SSL ) is a cryptographic protocol that allows a server and a client to communicate securely. Along with its extended and improved version called Transport Layer Security ( TLS ), it ensures both privacy and data integrity. The Apache HTTP Server in combination with mod_ssl , a module that uses the OpenSSL toolkit to provide the SSL/TLS support, is commonly referred to as the SSL server . Red Hat Enterprise Linux also supports the use of Mozilla NSS as the TLS implementation. Support for Mozilla NSS is provided by the mod_nss module. Unlike an HTTP connection that can be read and possibly modified by anybody who is able to intercept it, the use of SSL/TLS over HTTP, referred to as HTTPS, prevents any inspection or modification of the transmitted content. This section provides basic information on how to enable this module in the Apache HTTP Server configuration, and guides you through the process of generating private keys and self-signed certificates. 14.1.7.1. An Overview of Certificates and Security Secure communication is based on the use of keys. In conventional or symmetric cryptography , both ends of the transaction have the same key they can use to decode each other's transmissions. On the other hand, in public or asymmetric cryptography , two keys co-exist: a private key that is kept a secret, and a public key that is usually shared with the public. While the data encoded with the public key can only be decoded with the private key, data encoded with the private key can in turn only be decoded with the public key. To provide secure communications using SSL, an SSL server must use a digital certificate signed by a Certificate Authority ( CA ). The certificate lists various attributes of the server (that is, the server host name, the name of the company, its location, etc.), and the signature produced using the CA's private key. This signature ensures that a particular certificate authority has signed the certificate, and that the certificate has not been modified in any way. When a web browser establishes a new SSL connection, it checks the certificate provided by the web server. 
If the certificate does not have a signature from a trusted CA, or if the host name listed in the certificate does not match the host name used to establish the connection, it refuses to communicate with the server and usually presents a user with an appropriate error message. By default, most web browsers are configured to trust a set of widely used certificate authorities. Because of this, an appropriate CA should be chosen when setting up a secure server, so that target users can trust the connection; otherwise they will be presented with an error message, and will have to accept the certificate manually. Since encouraging users to override certificate errors can allow an attacker to intercept the connection, you should use a trusted CA whenever possible. For more information on this, see Table 14.2, "Information about CA lists used by common web browsers" . Table 14.2. Information about CA lists used by common web browsers Web Browser Link Mozilla Firefox Mozilla root CA list . Opera Information on root certificates used by Opera . Internet Explorer Information on root certificates used by Microsoft Windows . Chromium Information on root certificates used by the Chromium project . When setting up an SSL server, you need to generate a certificate request and a private key, and then send the certificate request, proof of the company's identity, and payment to a certificate authority. Once the CA verifies the certificate request and your identity, it will send you a signed certificate you can use with your server. Alternatively, you can create a self-signed certificate that does not contain a CA signature, and thus should be used for testing purposes only. 14.1.8. Enabling the mod_ssl Module If you intend to set up an SSL or HTTPS server using mod_ssl , you cannot have another application or module, such as mod_nss , configured to use the same port. Port 443 is the default port for HTTPS. To set up an SSL server using the mod_ssl module and the OpenSSL toolkit, install the mod_ssl and openssl packages. Enter the following command as root : This will create the mod_ssl configuration file at /etc/httpd/conf.d/ssl.conf , which is included in the main Apache HTTP Server configuration file by default. For the module to be loaded, restart the httpd service as described in Section 14.1.3.3, "Restarting the Service" . Important Due to the vulnerability described in POODLE: SSLv3 vulnerability (CVE-2014-3566) , Red Hat recommends disabling SSL and using only TLSv1.1 or TLSv1.2 . Backwards compatibility can be achieved using TLSv1.0 . Many products Red Hat supports have the ability to use SSLv2 or SSLv3 protocols, or enable them by default. However, the use of SSLv2 or SSLv3 is now strongly recommended against. 14.1.8.1. Enabling and Disabling SSL and TLS in mod_ssl To disable and enable specific versions of the SSL and TLS protocol, either do it globally by adding the SSLProtocol directive in the " # SSL Global Context" section of the configuration file and removing it everywhere else, or edit the default entry under " SSL Protocol support" in all "VirtualHost" sections. If you do not specify it in the per-domain VirtualHost section, then it will inherit the settings from the global section. To make sure that a protocol version is being disabled, the administrator should either only specify SSLProtocol in the "SSL Global Context" section, or specify it in all per-domain VirtualHost sections.
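The directive itself takes a list of protocol names, optionally prefixed with + or - ; for example, the commonly used settings below enable everything except SSL version 2 and 3, or restrict the server to TLS 1.1 and 1.2 only (shown for illustration; choose values that match your policy):

SSLProtocol all -SSLv2 -SSLv3
SSLProtocol TLSv1.1 TLSv1.2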
Disable SSLv2 and SSLv3 To disable SSL version 2 and SSL version 3, which implies enabling everything except SSL version 2 and SSL version 3, in all VirtualHost sections, proceed as follows: As root , open the /etc/httpd/conf.d/ssl.conf file and search for all instances of the SSLProtocol directive. By default, the configuration file contains one section that looks as follows: This section is within the VirtualHost section. Edit the SSLProtocol line as follows: Repeat this action for all VirtualHost sections. Save and close the file. Verify that all occurrences of the SSLProtocol directive have been changed as follows: This step is particularly important if you have more than the one default VirtualHost section. Restart the Apache daemon as follows: Note that any sessions will be interrupted. Disable All SSL and TLS Protocols Except TLS 1 and Up To disable all SSL and TLS protocol versions except TLS version 1 and higher, proceed as follows: As root , open the /etc/httpd/conf.d/ssl.conf file and search for all instances of SSLProtocol directive. By default the file contains one section that looks as follows: Edit the SSLProtocol line as follows: Save and close the file. Verify the change as follows: Restart the Apache daemon as follows: Note that any sessions will be interrupted. Testing the Status of SSL and TLS Protocols To check which versions of SSL and TLS are enabled or disabled, make use of the openssl s_client -connect command. The command has the following form: Where port is the port to test and protocol is the protocol version to test for. To test the SSL server running locally, use localhost as the host name. For example, to test the default port for secure HTTPS connections, port 443 to see if SSLv3 is enabled, issue a command as follows: The above output indicates that the handshake failed and therefore no cipher was negotiated. The above output indicates that no failure of the handshake occurred and a set of ciphers was negotiated. The openssl s_client command options are documented in the s_client(1) manual page. For more information on the SSLv3 vulnerability and how to test for it, see the Red Hat Knowledgebase article POODLE: SSLv3 vulnerability (CVE-2014-3566) . 14.1.9. Enabling the mod_nss Module If you intend to set up an HTTPS server using mod_nss , you cannot have the mod_ssl package installed with its default settings as mod_ssl will use port 443 by default, however this is the default HTTPS port. If at all possible, remove the package. To remove mod_ssl , enter the following command as root : Note If mod_ssl is required for other purposes, modify the /etc/httpd/conf.d/ssl.conf file to use a port other than 443 to prevent mod_ssl conflicting with mod_nss when its port to listen on is changed to 443 . Only one module can own a port, therefore mod_nss and mod_ssl can only co-exist at the same time if they use unique ports. For this reason mod_nss by default uses 8443 , but the default port for HTTPS is port 443 . The port is specified by the Listen directive as well as in the VirtualHost name or address. Everything in NSS is associated with a "token". The software token exists in the NSS database but you can also have a physical token containing certificates. With OpenSSL, discrete certificates and private keys are held in PEM files. With NSS, these are stored in a database. Each certificate and key is associated with a token and each token can have a password protecting it. 
This password is optional, but if a password is used, the Apache HTTP server needs a copy of it in order to open the database without user intervention at system start. Configuring mod_nss Install mod_nss as root : This will create the mod_nss configuration file at /etc/httpd/conf.d/nss.conf . The /etc/httpd/conf.d/ directory is included in the main Apache HTTP Server configuration file by default. For the module to be loaded, restart the httpd service as described in Section 14.1.3.3, "Restarting the Service" . As root , open the /etc/httpd/conf.d/nss.conf file and search for all instances of the Listen directive. Edit the Listen 8443 line as follows: Port 443 is the default port for HTTPS . Edit the default VirtualHost default :8443 line as follows: Edit any other non-default virtual host sections if they exist. Save and close the file. Mozilla NSS stores certificates in a server certificate database indicated by the NSSCertificateDatabase directive in the /etc/httpd/conf.d/nss.conf file. By default the path is set to /etc/httpd/alias , the NSS database created during installation. To view the default NSS database, issue a command as follows: In the above command output, Server-Cert is the default NSSNickname . The -L option lists all the certificates, or displays information about a named certificate, in a certificate database. The -d option specifies the database directory containing the certificate and key database files. See the certutil(1) man page for more command line options. To configure mod_nss to use another database, edit the NSSCertificateDatabase line in the /etc/httpd/conf.d/nss.conf file. The default file has the following lines within the VirtualHost section. In the above configuration lines, alias is the default NSS database directory, /etc/httpd/alias/ . To apply a password to the default NSS certificate database, use the following command as root : Before deploying the HTTPS server, create a new certificate database using a certificate signed by a certificate authority (CA). Example 14.3. Adding a Certificate to the Mozilla NSS database The certutil command is used to add a CA certificate to the NSS database files: The above command adds a CA certificate stored in a PEM-formatted file named certificate.pem . The -d option specifies the NSS database directory containing the certificate and key database files, the -n option sets a name for the certificate, and -t CT,, means that the certificate is trusted to be used in TLS clients and servers. The -A option adds an existing certificate to a certificate database. If the database does not exist it will be created. The -a option allows the use of ASCII format for input or output, and the -i option passes the certificate.pem input file to the command. See the certutil(1) man page for more command line options. The NSS database should be password protected to safeguard the private key. Example 14.4. Setting a Password for a Mozilla NSS database The certutil tool can be used to set a password for an NSS database as follows: For example, for the default database, issue a command as root as follows: Configure mod_nss to use the NSS internal software token by changing the line with the NSSPassPhraseDialog directive as follows: This is to avoid manual password entry on system start. The software token exists in the NSS database but you can also have a physical token containing your certificates.
If the SSL Server Certificate contained in the NSS database is an RSA certificate, make certain that the NSSNickname parameter is uncommented and matches the nickname displayed in step 4 above: If the SSL Server Certificate contained in the NSS database is an ECC certificate, make certain that the NSSECCNickname parameter is uncommented and matches the nickname displayed in step 4 above: Make certain that the NSSCertificateDatabase parameter is uncommented and points to the NSS database directory displayed in step 4 or configured in step 5 above: Replace /etc/httpd/alias with the path to the certificate database to be used. Create the /etc/httpd/password.conf file as root : Add a line with the following form: Replace password with the password that was applied to the NSS security databases in step 6 above. Apply the appropriate ownership and permissions to the /etc/httpd/password.conf file: To configure mod_nss to use the NSS software token and the password stored in /etc/httpd/password.conf , edit /etc/httpd/conf.d/nss.conf as follows: Restart the Apache server for the changes to take effect as described in Section 14.1.3.3, "Restarting the Service" . Important Due to the vulnerability described in POODLE: SSLv3 vulnerability (CVE-2014-3566) , Red Hat recommends disabling SSL and using only TLSv1.1 or TLSv1.2 . Backwards compatibility can be achieved using TLSv1.0 . Many products Red Hat supports have the ability to use SSLv2 or SSLv3 protocols, or enable them by default. However, the use of SSLv2 or SSLv3 is now strongly recommended against. 14.1.9.1. Enabling and Disabling SSL and TLS in mod_nss To disable and enable specific versions of the SSL and TLS protocol, either do it globally by adding the NSSProtocol directive in the " # SSL Global Context" section of the configuration file and removing it everywhere else, or edit the default entry under " SSL Protocol" in all "VirtualHost" sections. If you do not specify it in the per-domain VirtualHost section, it will inherit the settings from the global section. To make sure that a protocol version is being disabled, the administrator should either only specify NSSProtocol in the "SSL Global Context" section, or specify it in all per-domain VirtualHost sections. Disable All SSL and TLS Protocols Except TLS 1 and Up in mod_nss To disable all SSL and TLS protocol versions except TLS version 1 and higher, proceed as follows: As root , open the /etc/httpd/conf.d/nss.conf file and search for all instances of the NSSProtocol directive. By default, the configuration file contains one section that looks as follows: This section is within the VirtualHost section. Edit the NSSProtocol line as follows: Repeat this action for all VirtualHost sections. Edit the Listen 8443 line as follows: Edit the default VirtualHost default :8443 line as follows: Edit any other non-default virtual host sections if they exist. Save and close the file. Verify that all occurrences of the NSSProtocol directive have been changed as follows: This step is particularly important if you have more than one VirtualHost section. Restart the Apache daemon as follows: Note that any sessions will be interrupted. Testing the Status of SSL and TLS Protocols in mod_nss To check which versions of SSL and TLS are enabled or disabled in mod_nss , make use of the openssl s_client -connect command. Install the openssl package as root : The openssl s_client -connect command has the following form: Where port is the port to test and protocol is the protocol version to test for.
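For instance, a hedged illustration of this form, probing a remote host on a non-standard port for TLS 1.1 support (the host name and port here are placeholders; substitute your own):

openssl s_client -connect www.example.com:8443 -tls1_1

The connection attempt either completes the handshake and prints the negotiated cipher, or fails with a handshake error if that protocol version is disabled on the server.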
To test the SSL server running locally, use localhost as the host name. For example, to test the default port for secure HTTPS connections, port 443 to see if SSLv3 is enabled, issue a command as follows: The above output indicates that the handshake failed and therefore no cipher was negotiated. The above output indicates that no failure of the handshake occurred and a set of ciphers was negotiated. The openssl s_client command options are documented in the s_client(1) manual page. For more information on the SSLv3 vulnerability and how to test for it, see the Red Hat Knowledgebase article POODLE: SSLv3 vulnerability (CVE-2014-3566) . 14.1.10. Using an Existing Key and Certificate If you have a previously created key and certificate, you can configure the SSL server to use these files instead of generating new ones. There are only two situations where this is not possible: You are changing the IP address or domain name. Certificates are issued for a particular IP address and domain name pair. If one of these values changes, the certificate becomes invalid. You have a certificate from VeriSign, and you are changing the server software. VeriSign, a widely used certificate authority, issues certificates for a particular software product, IP address, and domain name. Changing the software product renders the certificate invalid. In either of the above cases, you will need to obtain a new certificate. For more information on this topic, see Section 14.1.11, "Generating a New Key and Certificate" . If you want to use an existing key and certificate, move the relevant files to the /etc/pki/tls/private/ and /etc/pki/tls/certs/ directories respectively. You can do so by issuing the following commands as root : Then add the following lines to the /etc/httpd/conf.d/ssl.conf configuration file: To load the updated configuration, restart the httpd service as described in Section 14.1.3.3, "Restarting the Service" . Example 14.5. Using a key and certificate from the Red Hat Secure Web Server 14.1.11. Generating a New Key and Certificate In order to generate a new key and certificate pair, the crypto-utils package must be installed on the system. To install it, enter the following command as root : This package provides a set of tools to generate and manage SSL certificates and private keys, and includes genkey , the Red Hat Keypair Generation utility that will guide you through the key generation process. Important If the server already has a valid certificate and you are replacing it with a new one, specify a different serial number. This ensures that client browsers are notified of this change, update to this new certificate as expected, and do not fail to access the page. To create a new certificate with a custom serial number, as root , use the following command instead of genkey : Note If there already is a key file for a particular host name in your system, genkey will refuse to start. In this case, remove the existing file using the following command as root : To run the utility enter the genkey command as root , followed by the appropriate host name (for example, penguin.example.com ): To complete the key and certificate creation, take the following steps: Review the target locations in which the key and certificate will be stored. Figure 14.1. Running the genkey utility Use the Tab key to select the button, and press Enter to proceed to the screen. Using the up and down arrow keys, select a suitable key size. 
Note that while a larger key increases the security, it also increases the response time of your server. NIST recommends using 2048 bits . See NIST Special Publication 800-131A . Figure 14.2. Selecting the key size Once finished, use the Tab key to select the button, and press Enter to initiate the random bits generation process. Depending on the selected key size, this may take some time. Decide whether you want to send a certificate request to a certificate authority. Figure 14.3. Generating a certificate request Use the Tab key to select Yes to compose a certificate request, or No to generate a self-signed certificate. Then press Enter to confirm your choice. Using the Spacebar key, enable ( [*] ) or disable ( [ ] ) the encryption of the private key. Figure 14.4. Encrypting the private key Use the Tab key to select the button, and press Enter to proceed to the screen. If you have enabled the private key encryption, enter an adequate passphrase. Note that for security reasons, it is not displayed as you type, and it must be at least five characters long. Figure 14.5. Entering a passphrase Use the Tab key to select the button, and press Enter to proceed to the screen. Important Entering the correct passphrase is required in order for the server to start. If you lose it, you will need to generate a new key and certificate. Customize the certificate details. Figure 14.6. Specifying certificate information Use the Tab key to select the button, and press Enter to finish the key generation. If you have previously enabled the certificate request generation, you will be prompted to send it to a certificate authority. Figure 14.7. Instructions on how to send a certificate request Press Enter to return to a shell prompt. Once generated, add the key and certificate locations to the /etc/httpd/conf.d/ssl.conf configuration file: Finally, restart the httpd service as described in Section 14.1.3.3, "Restarting the Service" , so that the updated configuration is loaded. 14.1.12. Configure the Firewall for HTTP and HTTPS Using the Command Line Red Hat Enterprise Linux does not allow HTTP and HTTPS traffic by default. To enable the system to act as a web server, make use of firewalld 's supported services to enable HTTP and HTTPS traffic to pass through the firewall as required. To enable HTTP using the command line, issue the following command as root : To enable HTTPS using the command line, issue the following command as root : Note that these changes will not persist after the system is restarted. To make permanent changes to the firewall, repeat the commands adding the --permanent option; a short example is sketched below, after the resource listings. 14.1.12.1. Checking Network Access for Incoming HTTP and HTTPS Using the Command Line To check what services the firewall is configured to allow, using the command line, issue the following command as root : In this example taken from a default installation, the firewall is enabled but HTTP and HTTPS have not been allowed to pass through. Once the HTTP and HTTPS firewall services are enabled, the services line will appear similar to the following: For more information on enabling firewall services, or opening and closing ports with firewalld , see the Red Hat Enterprise Linux 7 Security Guide . 14.1.13. Additional Resources To learn more about the Apache HTTP Server, see the following resources. Installed Documentation httpd(8) - The manual page for the httpd service containing the complete list of its command-line options. genkey(1) - The manual page for the genkey utility, provided by the crypto-utils package.
apachectl(8) - The manual page for the Apache HTTP Server Control Interface. Installable Documentation http://localhost/manual/ - The official documentation for the Apache HTTP Server with the full description of its directives and available modules. Note that in order to access this documentation, you must have the httpd-manual package installed, and the web server must be running. Before accessing the documentation, issue the following commands as root : Online Documentation http://httpd.apache.org/ - The official website for the Apache HTTP Server with documentation on all the directives and default modules. http://www.openssl.org/ - The OpenSSL home page containing further documentation, frequently asked questions, links to the mailing lists, and other useful resources.
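As noted in Section 14.1.12, firewall changes made without the --permanent option are lost when firewalld or the system restarts. A minimal sketch of the permanent variant, assuming the default zone is in use, followed by a reload so the permanent configuration becomes active immediately:

~]# firewall-cmd --permanent --add-service http
~]# firewall-cmd --permanent --add-service https
~]# firewall-cmd --reload

The runtime-only commands shown earlier remain useful for quick testing before committing the change permanently.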
[ "service httpd graceful", "apachectl graceful", "service httpd configtest", "apachectl configtest", "~]# apachectl configtest Syntax OK", "~]# yum install httpd", "~]# systemctl start httpd.service", "~]# systemctl enable httpd.service Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.", "~]# systemctl stop httpd.service", "~]# systemctl disable httpd.service Removed symlink /etc/systemd/system/multi-user.target.wants/httpd.service.", "~]# systemctl restart httpd.service", "~]# systemctl reload httpd.service", "~]# apachectl graceful", "~]# systemctl is-active httpd.service active", "~]# apachectl configtest Syntax OK", "LoadModule ssl_module modules/mod_ssl.so", "~]# yum install httpd-devel", "~]# apxs -i -a -c module_name.c", "<VirtualHost *:80> ServerAdmin [email protected] DocumentRoot \"/www/docs/penguin.example.com\" ServerName penguin.example.com ServerAlias www.penguin.example.com ErrorLog \"/var/log/httpd/dummy-host.example.com-error_log\" CustomLog \"/var/log/httpd/dummy-host.example.com-access_log\" common </VirtualHost>", "~]# yum install mod_ssl openssl", "~]# vi /etc/httpd/conf.d/ssl.conf SSL Protocol support: List the enable protocol levels with which clients will be able to connect. Disable SSLv2 access by default: SSLProtocol all -SSLv2", "SSL Protocol support: List the enable protocol levels with which clients will be able to connect. Disable SSLv2 access by default: SSLProtocol all -SSLv2 -SSLv3", "~]# grep SSLProtocol /etc/httpd/conf.d/ssl.conf SSLProtocol all -SSLv2 -SSLv3", "~]# systemctl restart httpd", "~]# vi /etc/httpd/conf.d/ssl.conf SSL Protocol support: List the enable protocol levels with which clients will be able to connect. Disable SSLv2 access by default: SSLProtocol all -SSLv2", "SSL Protocol support: List the enable protocol levels with which clients will be able to connect. Disable SSLv2 access by default: SSLProtocol -all +TLSv1 +TLSv1.1 +TLSv1.2", "~]# grep SSLProtocol /etc/httpd/conf.d/ssl.conf SSLProtocol -all +TLSv1 +TLSv1.1 +TLSv1.2", "~]# systemctl restart httpd", "openssl s_client -connect hostname : port - protocol", "~]# openssl s_client -connect localhost:443 -ssl3 CONNECTED(00000003) 139809943877536:error:14094410:SSL routines:SSL3_READ_BYTES: sslv3 alert handshake failure :s3_pkt.c:1257:SSL alert number 40 139809943877536:error:1409E0E5:SSL routines:SSL3_WRITE_BYTES: ssl handshake failure :s3_pkt.c:596: output omitted New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE SSL-Session: Protocol : SSLv3 output truncated", "~]USD openssl s_client -connect localhost:443 -tls1_2 CONNECTED(00000003) depth=0 C = --, ST = SomeState, L = SomeCity, O = SomeOrganization, OU = SomeOrganizationalUnit, CN = localhost.localdomain, emailAddress = [email protected] output omitted New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384 Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1.2 output truncated", "~]# yum remove mod_ssl", "~]# yum install mod_nss", "Listen 443", "VirtualHost default :443", "~]# certutil -L -d /etc/httpd/alias Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI cacert CTu,Cu,Cu Server-Cert u,u,u alpha u,pu,u", "Server Certificate Database: The NSS security database directory that holds the certificates and keys. The database consists of 3 files: cert8.db, key3.db and secmod.db. Provide the directory that these files exist. 
NSSCertificateDatabase /etc/httpd/alias", "~]# certutil -W -d /etc/httpd/alias Enter Password or Pin for \"NSS Certificate DB\": Enter a password which will be used to encrypt your keys. The password should be at least 8 characters long, and should contain at least one non-alphabetic character. Enter new password: Re-enter password: Password changed successfully.", "certutil -d /etc/httpd/nss-db-directory/ -A -n \" CA_certificate \" -t CT,, -a -i certificate.pem", "certutil -W -d /etc/httpd/ nss-db-directory /", "~]# certutil -W -d /etc/httpd/alias Enter Password or Pin for \"NSS Certificate DB\": Enter a password which will be used to encrypt your keys. The password should be at least 8 characters long, and should contain at least one non-alphabetic character. Enter new password: Re-enter password: Password changed successfully.", "~]# vi /etc/httpd/conf.d/nss.conf NSSPassPhraseDialog file:/etc/httpd/password.conf", "~]# vi /etc/httpd/conf.d/nss.conf NSSNickname Server-Cert", "~]# vi /etc/httpd/conf.d/nss.conf NSSECCNickname Server-Cert", "~]# vi /etc/httpd/conf.d/nss.conf NSSCertificateDatabase /etc/httpd/alias", "~]# vi /etc/httpd/password.conf", "internal: password", "~]# chgrp apache /etc/httpd/password.conf ~]# chmod 640 /etc/httpd/password.conf ~]# ls -l /etc/httpd/password.conf -rw-r-----. 1 root apache 10 Dec 4 17:13 /etc/httpd/password.conf", "~]# vi /etc/httpd/conf.d/nss.conf", "~]# vi /etc/httpd/conf.d/nss.conf SSL Protocol: output omitted Since all protocol ranges are completely inclusive, and no protocol in the middle of a range may be excluded, the entry \"NSSProtocol SSLv3,TLSv1.1\" is identical to the entry \"NSSProtocol SSLv3,TLSv1.0,TLSv1.1\". NSSProtocol SSLv3,TLSv1.0,TLSv1.1", "SSL Protocol: NSSProtocol TLSv1.0,TLSv1.1", "Listen 443", "VirtualHost default :443", "~]# grep NSSProtocol /etc/httpd/conf.d/nss.conf middle of a range may be excluded, the entry \" NSSProtocol SSLv3,TLSv1.1\" is identical to the entry \" NSSProtocol SSLv3,TLSv1.0,TLSv1.1\". 
NSSProtocol TLSv1.0,TLSv1.1", "~]# service httpd restart", "~]# yum install openssl", "openssl s_client -connect hostname : port - protocol", "~]# openssl s_client -connect localhost:443 -ssl3 CONNECTED(00000003) 3077773036:error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number:s3_pkt.c:337: output omitted New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE SSL-Session: Protocol : SSLv3 output truncated", "~]USD openssl s_client -connect localhost:443 -tls1 CONNECTED(00000003) depth=1 C = US, O = example.com, CN = Certificate Shack output omitted New, TLSv1/SSLv3, Cipher is AES128-SHA Server public key is 1024 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 output truncated", "~]# mv key_file.key /etc/pki/tls/private/hostname.key ~]# mv certificate.crt /etc/pki/tls/certs/hostname.crt", "SSLCertificateFile /etc/pki/tls/certs/ hostname .crt SSLCertificateKeyFile /etc/pki/tls/private/ hostname .key", "~]# mv /etc/httpd/conf/httpsd.key /etc/pki/tls/private/penguin.example.com.key ~]# mv /etc/httpd/conf/httpsd.crt /etc/pki/tls/certs/penguin.example.com.crt", "~]# yum install crypto-utils", "~]# openssl req -x509 -new -set_serial number -key hostname.key -out hostname.crt", "~]# rm /etc/pki/tls/private/hostname.key", "~]# genkey hostname", "SSLCertificateFile /etc/pki/tls/certs/ hostname .crt SSLCertificateKeyFile /etc/pki/tls/private/ hostname .key", "~]# firewall-cmd --add-service http success", "~]# firewall-cmd --add-service https success", "~]# firewall-cmd --list-all public (default, active) interfaces: em1 sources: services: dhcpv6-client ssh output truncated", "services: dhcpv6-client http https ssh", "~] yum install httpd-manual ~] apachectl graceful" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-web_servers
Chapter 6. ClusterVersion [config.openshift.io/v1]
Chapter 6. ClusterVersion [config.openshift.io/v1] Description ClusterVersion is the configuration for the ClusterVersionOperator. This is where parameters related to automatic updates can be set. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the desired state of the cluster version - the operator will work to ensure that the desired version is applied to the cluster. status object status contains information about the available updates and any in-progress updates. 6.1.1. .spec Description spec is the desired state of the cluster version - the operator will work to ensure that the desired version is applied to the cluster. Type object Required clusterID Property Type Description capabilities object capabilities configures the installation of optional, core cluster components. A null value here is identical to an empty object; see the child properties for default semantics. channel string channel is an identifier for explicitly requesting that a non-default set of updates be applied to this cluster. The default channel will be contain stable updates that are appropriate for production clusters. clusterID string clusterID uniquely identifies this cluster. This is expected to be an RFC4122 UUID value (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx in hexadecimal values). This is a required field. desiredUpdate object desiredUpdate is an optional field that indicates the desired value of the cluster version. Setting this value will trigger an upgrade (if the current version does not match the desired version). The set of recommended update values is listed as part of available updates in status, and setting values outside that range may cause the upgrade to fail. Some of the fields are inter-related with restrictions and meanings described here. 1. image is specified, version is specified, architecture is specified. API validation error. 2. image is specified, version is specified, architecture is not specified. You should not do this. version is silently ignored and image is used. 3. image is specified, version is not specified, architecture is specified. API validation error. 4. image is specified, version is not specified, architecture is not specified. image is used. 5. image is not specified, version is specified, architecture is specified. version and desired architecture are used to select an image. 6. image is not specified, version is specified, architecture is not specified. version and current architecture are used to select an image. 7. image is not specified, version is not specified, architecture is specified. API validation error. 8. 
image is not specified, version is not specified, architecture is not specified. API validation error. If an upgrade fails the operator will halt and report status about the failing component. Setting the desired update value back to the version will cause a rollback to be attempted. Not all rollbacks will succeed. overrides array overrides is list of overides for components that are managed by cluster version operator. Marking a component unmanaged will prevent the operator from creating or updating the object. overrides[] object ComponentOverride allows overriding cluster version operator's behavior for a component. upstream string upstream may be used to specify the preferred update server. By default it will use the appropriate update server for the cluster and region. 6.1.2. .spec.capabilities Description capabilities configures the installation of optional, core cluster components. A null value here is identical to an empty object; see the child properties for default semantics. Type object Property Type Description additionalEnabledCapabilities array (string) additionalEnabledCapabilities extends the set of managed capabilities beyond the baseline defined in baselineCapabilitySet. The default is an empty set. baselineCapabilitySet string baselineCapabilitySet selects an initial set of optional capabilities to enable, which can be extended via additionalEnabledCapabilities. If unset, the cluster will choose a default, and the default may change over time. The current default is vCurrent. 6.1.3. .spec.desiredUpdate Description desiredUpdate is an optional field that indicates the desired value of the cluster version. Setting this value will trigger an upgrade (if the current version does not match the desired version). The set of recommended update values is listed as part of available updates in status, and setting values outside that range may cause the upgrade to fail. Some of the fields are inter-related with restrictions and meanings described here. 1. image is specified, version is specified, architecture is specified. API validation error. 2. image is specified, version is specified, architecture is not specified. You should not do this. version is silently ignored and image is used. 3. image is specified, version is not specified, architecture is specified. API validation error. 4. image is specified, version is not specified, architecture is not specified. image is used. 5. image is not specified, version is specified, architecture is specified. version and desired architecture are used to select an image. 6. image is not specified, version is specified, architecture is not specified. version and current architecture are used to select an image. 7. image is not specified, version is not specified, architecture is specified. API validation error. 8. image is not specified, version is not specified, architecture is not specified. API validation error. If an upgrade fails the operator will halt and report status about the failing component. Setting the desired update value back to the version will cause a rollback to be attempted. Not all rollbacks will succeed. Type object Property Type Description architecture string architecture is an optional field that indicates the desired value of the cluster architecture. In this context cluster architecture means either a single architecture or a multi architecture. architecture can only be set to Multi thereby only allowing updates from single to multi architecture. If architecture is set, image cannot be set and version must be set. 
Valid values are 'Multi' and empty. force boolean force allows an administrator to update to an image that has failed verification or upgradeable checks. This option should only be used when the authenticity of the provided image has been verified out of band because the provided image will run with full administrative access to the cluster. Do not use this flag with images that comes from unknown or potentially malicious sources. image string image is a container image location that contains the update. image should be used when the desired version does not exist in availableUpdates or history. When image is set, version is ignored. When image is set, version should be empty. When image is set, architecture cannot be specified. version string version is a semantic version identifying the update version. version is ignored if image is specified and required if architecture is specified. 6.1.4. .spec.overrides Description overrides is list of overides for components that are managed by cluster version operator. Marking a component unmanaged will prevent the operator from creating or updating the object. Type array 6.1.5. .spec.overrides[] Description ComponentOverride allows overriding cluster version operator's behavior for a component. Type object Required group kind name namespace unmanaged Property Type Description group string group identifies the API group that the kind is in. kind string kind indentifies which object to override. name string name is the component's name. namespace string namespace is the component's namespace. If the resource is cluster scoped, the namespace should be empty. unmanaged boolean unmanaged controls if cluster version operator should stop managing the resources in this cluster. Default: false 6.1.6. .status Description status contains information about the available updates and any in-progress updates. Type object Required desired observedGeneration versionHash Property Type Description availableUpdates `` availableUpdates contains updates recommended for this cluster. Updates which appear in conditionalUpdates but not in availableUpdates may expose this cluster to known issues. This list may be empty if no updates are recommended, if the update service is unavailable, or if an invalid channel has been specified. capabilities object capabilities describes the state of optional, core cluster components. conditionalUpdates array conditionalUpdates contains the list of updates that may be recommended for this cluster if it meets specific required conditions. Consumers interested in the set of updates that are actually recommended for this cluster should use availableUpdates. This list may be empty if no updates are recommended, if the update service is unavailable, or if an empty or invalid channel has been specified. conditionalUpdates[] object ConditionalUpdate represents an update which is recommended to some clusters on the version the current cluster is reconciling, but which may not be recommended for the current cluster. conditions array conditions provides information about the cluster version. The condition "Available" is set to true if the desiredUpdate has been reached. The condition "Progressing" is set to true if an update is being applied. The condition "Degraded" is set to true if an update is currently blocked by a temporary or permanent error. Conditions are only valid for the current desiredUpdate when metadata.generation is equal to status.generation. 
conditions[] object ClusterOperatorStatusCondition represents the state of the operator's managed and monitored components. desired object desired is the version that the cluster is reconciling towards. If the cluster is not yet fully initialized desired will be set with the information available, which may be an image or a tag. history array history contains a list of the most recent versions applied to the cluster. This value may be empty during cluster startup, and then will be updated when a new update is being applied. The newest update is first in the list and it is ordered by recency. Updates in the history have state Completed if the rollout completed - if an update was failing or halfway applied the state will be Partial. Only a limited amount of update history is preserved. history[] object UpdateHistory is a single attempted update to the cluster. observedGeneration integer observedGeneration reports which version of the spec is being synced. If this value is not equal to metadata.generation, then the desired and conditions fields may represent a version. versionHash string versionHash is a fingerprint of the content that the cluster will be updated with. It is used by the operator to avoid unnecessary work and is for internal use only. 6.1.7. .status.capabilities Description capabilities describes the state of optional, core cluster components. Type object Property Type Description enabledCapabilities array (string) enabledCapabilities lists all the capabilities that are currently managed. knownCapabilities array (string) knownCapabilities lists all the capabilities known to the current cluster. 6.1.8. .status.conditionalUpdates Description conditionalUpdates contains the list of updates that may be recommended for this cluster if it meets specific required conditions. Consumers interested in the set of updates that are actually recommended for this cluster should use availableUpdates. This list may be empty if no updates are recommended, if the update service is unavailable, or if an empty or invalid channel has been specified. Type array 6.1.9. .status.conditionalUpdates[] Description ConditionalUpdate represents an update which is recommended to some clusters on the version the current cluster is reconciling, but which may not be recommended for the current cluster. Type object Required release risks Property Type Description conditions array conditions represents the observations of the conditional update's current status. Known types are: * Evaluating, for whether the cluster-version operator will attempt to evaluate any risks[].matchingRules. * Recommended, for whether the update is recommended for the current cluster. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } release object release is the target of the update. risks array risks represents the range of issues associated with updating to the target release. 
The cluster-version operator will evaluate all entries, and only recommend the update if there is at least one entry and all entries recommend the update. risks[] object ConditionalUpdateRisk represents a reason and cluster-state for not recommending a conditional update. 6.1.10. .status.conditionalUpdates[].conditions Description conditions represents the observations of the conditional update's current status. Known types are: * Evaluating, for whether the cluster-version operator will attempt to evaluate any risks[].matchingRules. * Recommended, for whether the update is recommended for the current cluster. Type array 6.1.11. .status.conditionalUpdates[].conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 6.1.12. .status.conditionalUpdates[].release Description release is the target of the update. Type object Property Type Description channels array (string) channels is the set of Cincinnati channels to which the release currently belongs. image string image is a container image location that contains the update. When this field is part of spec, image is optional if version is specified and the availableUpdates field contains a matching version. url string url contains information about this release. This URL is set by the 'url' metadata property on a release or the metadata returned by the update API and should be displayed as a link in user interfaces. The URL field may not be set for test or nightly releases. 
version string version is a semantic version identifying the update version. When this field is part of spec, version is optional if image is specified. 6.1.13. .status.conditionalUpdates[].risks Description risks represents the range of issues associated with updating to the target release. The cluster-version operator will evaluate all entries, and only recommend the update if there is at least one entry and all entries recommend the update. Type array 6.1.14. .status.conditionalUpdates[].risks[] Description ConditionalUpdateRisk represents a reason and cluster-state for not recommending a conditional update. Type object Required matchingRules message name url Property Type Description matchingRules array matchingRules is a slice of conditions for deciding which clusters match the risk and which do not. The slice is ordered by decreasing precedence. The cluster-version operator will walk the slice in order, and stop after the first it can successfully evaluate. If no condition can be successfully evaluated, the update will not be recommended. matchingRules[] object ClusterCondition is a union of typed cluster conditions. The 'type' property determines which of the type-specific properties are relevant. When evaluated on a cluster, the condition may match, not match, or fail to evaluate. message string message provides additional information about the risk of updating, in the event that matchingRules match the cluster state. This is only to be consumed by humans. It may contain Line Feed characters (U+000A), which should be rendered as new lines. name string name is the CamelCase reason for not recommending a conditional update, in the event that matchingRules match the cluster state. url string url contains information about this risk. 6.1.15. .status.conditionalUpdates[].risks[].matchingRules Description matchingRules is a slice of conditions for deciding which clusters match the risk and which do not. The slice is ordered by decreasing precedence. The cluster-version operator will walk the slice in order, and stop after the first it can successfully evaluate. If no condition can be successfully evaluated, the update will not be recommended. Type array 6.1.16. .status.conditionalUpdates[].risks[].matchingRules[] Description ClusterCondition is a union of typed cluster conditions. The 'type' property determines which of the type-specific properties are relevant. When evaluated on a cluster, the condition may match, not match, or fail to evaluate. Type object Required type Property Type Description promql object promQL represents a cluster condition based on PromQL. type string type represents the cluster-condition type. This defines the members and semantics of any additional properties. 6.1.17. .status.conditionalUpdates[].risks[].matchingRules[].promql Description promQL represents a cluster condition based on PromQL. Type object Required promql Property Type Description promql string PromQL is a PromQL query classifying clusters. This query query should return a 1 in the match case and a 0 in the does-not-match case. Queries which return no time series, or which return values besides 0 or 1, are evaluation failures. 6.1.18. .status.conditions Description conditions provides information about the cluster version. The condition "Available" is set to true if the desiredUpdate has been reached. The condition "Progressing" is set to true if an update is being applied. The condition "Degraded" is set to true if an update is currently blocked by a temporary or permanent error. 
Conditions are only valid for the current desiredUpdate when metadata.generation is equal to status.generation. Type array 6.1.19. .status.conditions[] Description ClusterOperatorStatusCondition represents the state of the operator's managed and monitored components. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the time of the last update to the current status property. message string message provides additional information about the current condition. This is only to be consumed by humans. It may contain Line Feed characters (U+000A), which should be rendered as new lines. reason string reason is the CamelCase reason for the condition's current status. status string status of the condition, one of True, False, Unknown. type string type specifies the aspect reported by this condition. 6.1.20. .status.desired Description desired is the version that the cluster is reconciling towards. If the cluster is not yet fully initialized desired will be set with the information available, which may be an image or a tag. Type object Property Type Description channels array (string) channels is the set of Cincinnati channels to which the release currently belongs. image string image is a container image location that contains the update. When this field is part of spec, image is optional if version is specified and the availableUpdates field contains a matching version. url string url contains information about this release. This URL is set by the 'url' metadata property on a release or the metadata returned by the update API and should be displayed as a link in user interfaces. The URL field may not be set for test or nightly releases. version string version is a semantic version identifying the update version. When this field is part of spec, version is optional if image is specified. 6.1.21. .status.history Description history contains a list of the most recent versions applied to the cluster. This value may be empty during cluster startup, and then will be updated when a new update is being applied. The newest update is first in the list and it is ordered by recency. Updates in the history have state Completed if the rollout completed - if an update was failing or halfway applied the state will be Partial. Only a limited amount of update history is preserved. Type array 6.1.22. .status.history[] Description UpdateHistory is a single attempted update to the cluster. Type object Required image startedTime state verified Property Type Description acceptedRisks string acceptedRisks records risks which were accepted to initiate the update. For example, it may menition an Upgradeable=False or missing signature that was overriden via desiredUpdate.force, or an update that was initiated despite not being in the availableUpdates set of recommended update targets. completionTime `` completionTime, if set, is when the update was fully applied. The update that is currently being applied will have a null completion time. Completion time will always be set for entries that are not the current update (usually to the started time of the update). image string image is a container image location that contains the update. This value is always populated. startedTime string startedTime is the time at which the update was started. state string state reflects whether the update was fully applied. 
The Partial state indicates the update is not fully applied, while the Completed state indicates the update was successfully rolled out at least once (all parts of the update successfully applied). verified boolean verified indicates whether the provided update was properly verified before it was installed. If this is false the cluster may not be trusted. Verified does not cover upgradeable checks that depend on the cluster state at the time when the update target was accepted. version string version is a semantic version identifying the update version. If the requested image does not define a version, or if a failure occurs retrieving the image, this value may be empty. 6.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/clusterversions DELETE : delete collection of ClusterVersion GET : list objects of kind ClusterVersion POST : create a ClusterVersion /apis/config.openshift.io/v1/clusterversions/{name} DELETE : delete a ClusterVersion GET : read the specified ClusterVersion PATCH : partially update the specified ClusterVersion PUT : replace the specified ClusterVersion /apis/config.openshift.io/v1/clusterversions/{name}/status GET : read status of the specified ClusterVersion PATCH : partially update status of the specified ClusterVersion PUT : replace status of the specified ClusterVersion 6.2.1. /apis/config.openshift.io/v1/clusterversions HTTP method DELETE Description delete collection of ClusterVersion Table 6.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ClusterVersion Table 6.2. HTTP responses HTTP code Reponse body 200 - OK ClusterVersionList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterVersion Table 6.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.4. Body parameters Parameter Type Description body ClusterVersion schema Table 6.5. HTTP responses HTTP code Reponse body 200 - OK ClusterVersion schema 201 - Created ClusterVersion schema 202 - Accepted ClusterVersion schema 401 - Unauthorized Empty 6.2.2. /apis/config.openshift.io/v1/clusterversions/{name} Table 6.6. 
Global path parameters Parameter Type Description name string name of the ClusterVersion HTTP method DELETE Description delete a ClusterVersion Table 6.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterVersion Table 6.9. HTTP responses HTTP code Reponse body 200 - OK ClusterVersion schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterVersion Table 6.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.11. HTTP responses HTTP code Reponse body 200 - OK ClusterVersion schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterVersion Table 6.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.13. Body parameters Parameter Type Description body ClusterVersion schema Table 6.14. HTTP responses HTTP code Reponse body 200 - OK ClusterVersion schema 201 - Created ClusterVersion schema 401 - Unauthorized Empty 6.2.3. /apis/config.openshift.io/v1/clusterversions/{name}/status Table 6.15. Global path parameters Parameter Type Description name string name of the ClusterVersion HTTP method GET Description read status of the specified ClusterVersion Table 6.16. HTTP responses HTTP code Reponse body 200 - OK ClusterVersion schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ClusterVersion Table 6.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.18. HTTP responses HTTP code Reponse body 200 - OK ClusterVersion schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ClusterVersion Table 6.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.20. 
Body parameters Parameter Type Description body ClusterVersion schema Table 6.21. HTTP responses HTTP code Response body 200 - OK ClusterVersion schema 201 - Created ClusterVersion schema 401 - Unauthorized Empty
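For example, the PATCH operation and its dryRun and fieldValidation query parameters described above can be exercised directly against the endpoint with curl. The following is a minimal sketch only: the object name version is the conventional ClusterVersion name, the channel value is a placeholder, and the token and API server address are taken from the active oc session.

TOKEN=$(oc whoami -t)
APISERVER=$(oc whoami --show-server)

# Dry-run a merge patch of the ClusterVersion; with dryRun=All the request is
# validated through all stages but the change is not persisted.
curl -k -X PATCH \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/merge-patch+json" \
  -d '{"spec":{"channel":"stable-4.15"}}' \
  "${APISERVER}/apis/config.openshift.io/v1/clusterversions/version?dryRun=All&fieldValidation=Strict"

Removing dryRun=All from the query string persists the change; a 200 - OK response returns the resulting ClusterVersion schema, as listed in Table 6.11.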
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/config_apis/clusterversion-config-openshift-io-v1
Chapter 1. Security best practices
Chapter 1. Security best practices Get an overview of key best security practices for Red Hat OpenShift Dev Spaces that can help you foster a more resilient development environment. Red Hat OpenShift Dev Spaces runs on top of OpenShift, which provides the platform, and the foundation for the products functioning on top of it. OpenShift documentation is the entry point for security hardening. Project isolation in OpenShift In OpenShift, project isolation is similar to namespace isolation in Kubernetes but is achieved through the concept of projects. A project in OpenShift is a top-level organizational unit that provides isolation and collaboration between different applications, teams, or workloads within a cluster. By default, OpenShift Dev Spaces provisions a unique <username>-devspaces project for each user. Alternatively, the cluster administrator can disable project self-provisioning on the OpenShift level, and turn off automatic namespace provisioning in the CheCluster custom resource: devEnvironments: defaultNamespace: autoProvision: false With this setup, you achieve a curated access to OpenShift Dev Spaces, where cluster administrators control provisioning for each user and can explicitly configure various settings including resource limits and quotas. Learn more about project provisioning in the product documentation . Role-based access control (RBAC) By default, the OpenShift Dev Spaces operator creates the following ClusterRoles: <namespace>-cheworkspaces-clusterrole <namespace>-cheworkspaces-devworkspace-clusterrole Note The <namespace> prefix corresponds to the project name where the Red Hat OpenShift Dev Spaces CheCluster CR is located. The first time a user accesses Red Hat OpenShift Dev Spaces, the corresponding RoleBinding is created in the <username>-devspaces project. All resources and actions you can grant users permission to use in their namespace is listed below. Table 1.1. Overview of resources and actions available in a user's namespace. Resources Actions pods "get", "list", "watch", "create", "delete", "update", "patch" pods/exec "get", "create" pods/log "get", "list", "watch" pods/portforward "get", "list", "create" configmaps "get", "list", "create", "update", "patch", "delete" events "watch" secrets "get", "list", "create", "update", "patch", "delete" services "get", "list", "create", "delete", "update", "patch" routes "get", "list", "create", "delete" persistentvolumeclaims "get", "list", "watch", "create", "delete", "update", "patch" apps/deployments "get", "list", "watch", "create", "patch", "delete" apps/replicasets "get", "list", "patch", "delete" namespaces "get", "list" projects "get" devworkspace "get", "create", "delete", "list", "update", "patch", "watch" devworkspacetemplates "get", "create", "delete", "list", "update", "patch", "watch" Important Each user is granted permissions only to their namespace, and can not access other user's resources. Cluster administrators can add extra permissions to users. They should not remove permissions granted by default. Refer to the product documentation for configuring cluster roles for Red Hat OpenShift Dev Spaces users. More details about the role-based access control are available in the OpenShift documentation . Dev environment isolation Isolation of the development environments is implemented using OpenShift projects. Every developer has a project in which the following objects are created and managed: Cloud Development Environment (CDE) Pods, including the IDE server. 
Secrets containing developer credentials, such as a Git token, SSH keys, and a Kubernetes token. ConfigMaps with developer-specific configuration, such as the Git name and email. Volumes that persist data such as the source code, even when the CDE Pod is stopped. Important Access to the resources in a namespace must be limited to the developer owning it. Granting read access to another developer is equivalent to sharing the developer credentials and should be avoided. Enhanced authorization The current trend is to split an infrastructure into several "fit for purpose" clusters instead of having a gigantic monolith OpenShift cluster. However, administrators might still want to provide granular access, and restrict the availability of certain functionalities to particular users. Note A "fit for purpose" OpenShift cluster refers to a cluster that is specifically designed and configured to meet the requirements and goals of a particular use case or workload. It is tailored to optimize performance, resource utilization, and other factors based on the characteristics of the workloads it will be managing. For Red Hat OpenShift Dev Spaces, it is recommended to have this type of cluster provisioned. For this purpose, optional properties that you can use to set up granular access for different groups and users are available in the CheCluster Custom Resource: allowUsers allowGroups denyUsers denyGroups Below is an example of access configuration: networking: auth: advancedAuthorization: allowUsers: - user-a - user-b denyUsers: - user-c allowGroups: - openshift-group-a - openshift-group-b denyGroups: - openshift-group-c Note Users in the denyUsers and denyGroup categories will not be able to use Red Hat OpenShift Dev Spaces and will see a warning when trying to access the User Dashboard. Authentication Only authenticated OpenShift users can access Red Hat OpenShift Dev Spaces. The Gateway Pod uses a role-based access control (RBAC) subsystem to determine whether a developer is authorized to access a Cloud Development Environment (CDE) or not. The CDE Gateway container checks the developer's Kubernetes roles. If their roles allow access to the CDE Pod, the connection to the development environment is allowed. By default, only the owner of the namespace has access to the CDE Pod. Important Access to the resources in a namespace must be limited to the developer owning it. Granting read access to another developer is equivalent to sharing the developer credentials and should be avoided. Security context and security context constraint Red Hat OpenShift Dev Spaces adds SETGID and SETUID capabilities to the specification of the CDE Pod container security context: "spec": { "containers": [ "securityContext": { "allowPrivilegeEscalation": true, "capabilities": { "add": ["SETGID", "SETUID"], "drop": ["ALL","KILL","MKNOD"] }, "readOnlyRootFilesystem": false, "runAsNonRoot": true, "runAsUser": 1001110000 } ] } This provides the ability for users to build container images from within a CDE. By default, Red Hat OpenShift Dev Spaces assigns a specific SecurityContextConstraint (SCC) to the users that allows them to start a Pod with such capabilities. This SCC grants more capabilities to the users compared to the default restricted SCC but less capability compared to the anyuid SCC. This default SCC is pre-created in the OpenShift Dev Spaces namespace and named container-build . 
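To review what the container-build SCC described above actually grants, a cluster administrator can compare it with the default restricted SCC by using standard oc commands. This is a minimal sketch and assumes cluster-admin access:

# Show the SCC that OpenShift Dev Spaces pre-creates for container builds
oc get scc container-build -o yaml

# Compare its allowed capabilities and settings with the default restricted SCC
oc describe scc restricted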
Setting the following property in the CheCluster Custom Resource prevents assigning extra capabilities and SCC to users: spec: devEnvironments: disableContainerBuildCapabilities: true Resource Quotas and Limit Ranges Resource Quotas and Limit Ranges are Kubernetes features you can use to help prevent bad actors or resource abuse within a cluster. They help in controlling and managing resource consumption by pods and containers. By combining Resource Quotas and Limit Ranges, you can enforce project-specific policies to prevent bad actors from consuming excessive resources. These mechanisms contribute to better resource management, stability, and fairness within an OpenShift cluster. More details about Resource Quotas and Limit Ranges are available in the OpenShift documentation. Disconnected environment An air-gapped OpenShift disconnected cluster refers to an OpenShift cluster isolated from the internet or any external network. This isolation is often done for security reasons, to protect sensitive or critical systems from potential cyber threats. In an air-gapped environment, the cluster cannot access external repositories or registries to download container images, updates, or dependencies. Red Hat OpenShift Dev Spaces is supported and can be installed in a restricted environment. Installation instructions are available in the official documentation . Managing extensions By default, Red Hat OpenShift Dev Spaces includes the embedded Open VSX registry which contains a limited set of extensions used by Microsoft Visual Studio Code - Open Source editor. Alternatively, cluster administrators can specify a different plugin registry in the Custom Resource, e.g. https://open-vsx.org that contains thousands of extensions. They can also build a custom Open VSX registry. More details about managing IDE extensions are available in the official documentation . Important Installing extra extensions increases potential risks. To minimize these risks, make sure to only install extensions from reliable sources and regularly update them. Secrets Keep sensitive data stored as Kubernetes secrets in the users' namespaces confidential (e.g. Personal Access Tokens (PAT), and SSH keys). Git repositories It is crucial to operate within Git repositories that you are familiar with and that you trust. Before incorporating new dependencies into the repository, verify that they are well-maintained and regularly release updates to address any identified security vulnerabilities in their code.
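As a sketch of the Resource Quotas and Limit Ranges mentioned above, a cluster administrator could cap an individual developer's project as follows. The project name follows the default <username>-devspaces pattern, and the numbers are placeholder assumptions rather than recommended values:

# Cap the total CPU and memory that workloads in one developer's project can claim
oc create quota dev-quota \
  --hard=requests.cpu=4,requests.memory=8Gi,limits.cpu=8,limits.memory=16Gi \
  -n user1-devspaces

# Apply per-container defaults and ceilings with a LimitRange
oc apply -n user1-devspaces -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limits
spec:
  limits:
  - type: Container
    default:
      cpu: "1"
      memory: 2Gi
    max:
      cpu: "4"
      memory: 8Gi
EOF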
[ "devEnvironments: defaultNamespace: autoProvision: false", "networking: auth: advancedAuthorization: allowUsers: - user-a - user-b denyUsers: - user-c allowGroups: - openshift-group-a - openshift-group-b denyGroups: - openshift-group-c", "\"spec\": { \"containers\": [ \"securityContext\": { \"allowPrivilegeEscalation\": true, \"capabilities\": { \"add\": [\"SETGID\", \"SETUID\"], \"drop\": [\"ALL\",\"KILL\",\"MKNOD\"] }, \"readOnlyRootFilesystem\": false, \"runAsNonRoot\": true, \"runAsUser\": 1001110000 } ] }", "spec: devEnvironments: disableContainerBuildCapabilities: true" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.19/html/administration_guide/security-best-practices
Chapter 11. Internationalization
Chapter 11. Internationalization 11.1. Red Hat Enterprise Linux 8 international languages Red Hat Enterprise Linux 8 supports the installation of multiple languages and the changing of languages based on your requirements. East Asian Languages - Japanese, Korean, Simplified Chinese, and Traditional Chinese. European Languages - English, German, Spanish, French, Italian, Portuguese, and Russian. The following table lists the fonts and input methods provided for various major languages. Language Default Font (Font Package) Input Methods English dejavu-sans-fonts French dejavu-sans-fonts German dejavu-sans-fonts Italian dejavu-sans-fonts Russian dejavu-sans-fonts Spanish dejavu-sans-fonts Portuguese dejavu-sans-fonts Simplified Chinese google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-libpinyin, libpinyin Traditional Chinese google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-libzhuyin, libzhuyin Japanese google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-kkc, libkkc Korean google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-hangul, libhangul 11.2. Notable changes to internationalization in RHEL 8 RHEL 8 introduces the following changes to internationalization compared to RHEL 7: Support for the Unicode 11 computing industry standard has been added. Internationalization is distributed in multiple packages, which allows for smaller footprint installations. For more information, see Using langpacks . A number of glibc locales have been synchronized with Unicode Common Locale Data Repository (CLDR).
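The langpacks mechanism mentioned above can be driven directly from the command line. A small sketch, assuming you want to add Japanese support; the package and locale names follow the langpacks-<code> and <language>_<REGION>.<encoding> patterns:

# Install the Japanese langpack (fonts, input methods, and translations)
yum install langpacks-ja

# Confirm the locale is available and make it the system default
localectl list-locales | grep -i ja_JP
localectl set-locale LANG=ja_JP.UTF-8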
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.6_release_notes/internationalization
Chapter 9. Changing the MTU for the cluster network
Chapter 9. Changing the MTU for the cluster network As a cluster administrator, you can change the MTU for the cluster network after cluster installation. This change is disruptive as cluster nodes must be rebooted to finalize the MTU change. You can change the MTU only for clusters that use the OVN-Kubernetes plugin or the OpenShift SDN network plugin. 9.1. About the cluster MTU During installation the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster. You do not usually need to override the detected MTU. You might want to change the MTU of the cluster network for several reasons: The MTU detected during cluster installation is not correct for your infrastructure. Your cluster infrastructure now requires a different MTU, such as from the addition of nodes that need a different MTU for optimal performance. 9.1.1. Service interruption considerations When you initiate an MTU change on your cluster the following effects might impact service availability: At least two rolling reboots are required to complete the migration to a new MTU. During this time, some nodes are not available as they restart. Specific applications deployed to the cluster with shorter timeout intervals than the absolute TCP timeout interval might experience disruption during the MTU change. 9.1.2. MTU value selection When planning your MTU migration there are two related but distinct MTU values to consider. Hardware MTU : This MTU value is set based on the specifics of your network infrastructure. Cluster network MTU : This MTU value is always less than your hardware MTU to account for the cluster network overlay overhead. The specific overhead is determined by your network plugin. For OVN-Kubernetes, the overhead is 100 bytes. For OpenShift SDN, the overhead is 50 bytes. If your cluster requires different MTU values for different nodes, you must subtract the overhead value for your network plugin from the lowest MTU value that is used by any node in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . Important To avoid selecting an MTU value that is not acceptable by a node, verify the maximum MTU value ( maxmtu ) that is accepted by the network interface by using the ip -d link command. 9.1.3. How the migration process works The following table summarizes the migration process by segmenting between the user-initiated steps in the process and the actions that the migration performs in response. Table 9.1. Live migration of the cluster MTU User-initiated steps OpenShift Container Platform activity Set the following values in the Cluster Network Operator configuration: spec.migration.mtu.machine.to spec.migration.mtu.network.from spec.migration.mtu.network.to Cluster Network Operator (CNO) : Confirms that each field is set to a valid value. The mtu.machine.to must be set to either the new hardware MTU or to the current hardware MTU if the MTU for the hardware is not changing. This value is transient and is used as part of the migration process. Separately, if you specify a hardware MTU that is different from your existing hardware MTU value, you must manually configure the MTU to persist by other means, such as with a machine config, DHCP setting, or a Linux kernel command line. The mtu.network.from field must equal the network.status.clusterNetworkMTU field, which is the current MTU of the cluster network. 
The mtu.network.to field must be set to the target cluster network MTU and must be lower than the hardware MTU to allow for the overlay overhead of the network plugin. The overhead for OVN-Kubernetes is 100 bytes and for OpenShift SDN is 50 bytes. If the values provided are valid, the CNO writes out a new temporary configuration with the MTU for the cluster network set to the value of the mtu.network.to field. Machine Config Operator (MCO) : Performs a rolling reboot of each node in the cluster. Reconfigure the MTU of the primary network interface for the nodes on the cluster. You can use a variety of methods to accomplish this, including: Deploying a new NetworkManager connection profile with the MTU change Changing the MTU through a DHCP server setting Changing the MTU through boot parameters N/A Set the mtu value in the CNO configuration for the network plugin and set spec.migration to null . Machine Config Operator (MCO) : Performs a rolling reboot of each node in the cluster with the new MTU configuration. 9.2. Changing the cluster network MTU As a cluster administrator, you can increase or decrease the maximum transmission unit (MTU) for your cluster. Important The migration is disruptive and nodes in your cluster might be temporarily unavailable as the MTU update takes effect. The following procedure describes how to change the cluster network MTU by using either machine configs, Dynamic Host Configuration Protocol (DHCP), or an ISO image. If you use either the DHCP or ISO approaches, you must refer to configuration artifacts that you kept after installing your cluster to complete the procedure. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster using an account with cluster-admin permissions. You have identified the target MTU for your cluster. The MTU for the OVN-Kubernetes network plugin must be set to 100 less than the lowest hardware MTU value in your cluster. The MTU for the OpenShift SDN network plugin must be set to 50 less than the lowest hardware MTU value in your cluster. Procedure To obtain the current MTU for the cluster network, enter the following command: USD oc describe network.config cluster Example output ... Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 10.217.4.0/23 ... Prepare your configuration for the hardware MTU: If your hardware MTU is specified with DHCP, update your DHCP configuration such as with the following dnsmasq configuration: dhcp-option-force=26,<mtu> where: <mtu> Specifies the hardware MTU for the DHCP server to advertise. If your hardware MTU is specified with a kernel command line with PXE, update that configuration accordingly. If your hardware MTU is specified in a NetworkManager connection configuration, complete the following steps. This approach is the default for OpenShift Container Platform if you do not explicitly specify your network configuration with DHCP, a kernel command line, or some other method. Your cluster nodes must all use the same underlying network configuration for the following procedure to work unmodified. Find the primary network interface: If you are using the OpenShift SDN network plugin, enter the following command: USD oc debug node/<node_name> -- chroot /host ip route list match 0.0.0.0/0 | awk '{print USD5 }' where: <node_name> Specifies the name of a node in your cluster. 
If you are using the OVN-Kubernetes network plugin, enter the following command: USD oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0 where: <node_name> Specifies the name of a node in your cluster. Create the following NetworkManager configuration in the <interface>-mtu.conf file: Example NetworkManager connection configuration [connection-<interface>-mtu] match-device=interface-name:<interface> ethernet.mtu=<mtu> where: <mtu> Specifies the new hardware MTU value. <interface> Specifies the primary network interface name. Create two MachineConfig objects, one for the control plane nodes and another for the worker nodes in your cluster: Create the following Butane config in the control-plane-interface.bu file: variant: openshift version: 4.16.0 metadata: name: 01-control-plane-interface labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600 1 Specify the NetworkManager connection name for the primary network interface. 2 Specify the local filename for the updated NetworkManager configuration file from the step. Create the following Butane config in the worker-interface.bu file: variant: openshift version: 4.16.0 metadata: name: 01-worker-interface labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600 1 Specify the NetworkManager connection name for the primary network interface. 2 Specify the local filename for the updated NetworkManager configuration file from the step. Create MachineConfig objects from the Butane configs by running the following command: USD for manifest in control-plane-interface worker-interface; do butane --files-dir . USDmanifest.bu > USDmanifest.yaml done Warning Do not apply these machine configs until explicitly instructed later in this procedure. Applying these machine configs now causes a loss of stability for the cluster. To begin the MTU migration, specify the migration configuration by entering the following command. The Machine Config Operator performs a rolling reboot of the nodes in the cluster in preparation for the MTU change. USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": { "mtu": { "network": { "from": <overlay_from>, "to": <overlay_to> } , "machine": { "to" : <machine_to> } } } } }' where: <overlay_from> Specifies the current cluster network MTU value. <overlay_to> Specifies the target MTU for the cluster network. This value is set relative to the value of <machine_to> . For OVN-Kubernetes, this value must be 100 less than the value of <machine_to> . For OpenShift SDN, this value must be 50 less than the value of <machine_to> . <machine_to> Specifies the MTU for the primary network interface on the underlying host network. Example that increases the cluster MTU USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": { "mtu": { "network": { "from": 1400, "to": 9000 } , "machine": { "to" : 9100} } } } }' As the Machine Config Operator updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get machineconfigpools A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . 
Note By default, the Machine Config Operator updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command: USD oc get machineconfig <config_name> -o yaml | grep ExecStart where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. The machine config must include the following update to the systemd configuration: ExecStart=/usr/local/bin/mtu-migration.sh Update the underlying network interface MTU value: If you are specifying the new MTU with a NetworkManager connection configuration, enter the following command. The MachineConfig Operator automatically performs a rolling reboot of the nodes in your cluster. USD for manifest in control-plane-interface worker-interface; do oc create -f USDmanifest.yaml done If you are specifying the new MTU with a DHCP server option or a kernel command line and PXE, make the necessary changes for your infrastructure. As the Machine Config Operator updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get machineconfigpools A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the Machine Config Operator updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command: USD oc get machineconfig <config_name> -o yaml | grep path: where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. 
If the machine config is successfully deployed, the output contains the /etc/NetworkManager/conf.d/99-<interface>-mtu.conf file path and the ExecStart=/usr/local/bin/mtu-migration.sh line. Finalize the MTU migration for your plugin. In both example commands, <mtu> specifies the new cluster network MTU that you specified with <overlay_to> . To finalize the MTU migration, enter the following command for the OVN-Kubernetes network plugin: USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": null, "defaultNetwork":{ "ovnKubernetesConfig": { "mtu": <mtu> }}}}' To finalize the MTU migration, enter the following command for the OpenShift SDN network plugin: USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": null, "defaultNetwork":{ "openshiftSDNConfig": { "mtu": <mtu> }}}}' After finalizing the MTU migration, each machine config pool node is rebooted one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get machineconfigpools A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Verification To get the current MTU for the cluster network, enter the following command: USD oc describe network.config cluster Get the current MTU for the primary network interface of a node: To list the nodes in your cluster, enter the following command: USD oc get nodes To obtain the current MTU setting for the primary network interface on a node, enter the following command: USD oc debug node/<node> -- chroot /host ip address show <interface> where: <node> Specifies a node from the output from the step. <interface> Specifies the primary network interface name for the node. Example output ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8051 9.3. Additional resources Using advanced networking options for PXE and ISO installations Manually creating NetworkManager profiles in key file format Configuring a dynamic Ethernet connection using nmcli
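Before you pick the target values, it can help to confirm the maximum MTU that each node interface accepts, using the ip -d link command mentioned in section 9.1.2, and to spot-check the applied MTU after the migration. This is a minimal sketch; substitute <node_name> and <interface> for your environment, and note that not every driver reports maxmtu:

# Show the maxmtu reported by the primary network interface on one node
oc debug node/<node_name> -- chroot /host ip -d link show <interface> | grep -o 'maxmtu [0-9]*'

# After finalizing the migration, verify the MTU on every node
for node in $(oc get nodes -o name); do
  oc debug "$node" -- chroot /host ip address show <interface> | grep ' mtu '
done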
[ "oc describe network.config cluster", "Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 10.217.4.0/23", "dhcp-option-force=26,<mtu>", "oc debug node/<node_name> -- chroot /host ip route list match 0.0.0.0/0 | awk '{print USD5 }'", "oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0", "[connection-<interface>-mtu] match-device=interface-name:<interface> ethernet.mtu=<mtu>", "variant: openshift version: 4.16.0 metadata: name: 01-control-plane-interface labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600", "variant: openshift version: 4.16.0 metadata: name: 01-worker-interface labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600", "for manifest in control-plane-interface worker-interface; do butane --files-dir . USDmanifest.bu > USDmanifest.yaml done", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": <overlay_from>, \"to\": <overlay_to> } , \"machine\": { \"to\" : <machine_to> } } } } }'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": 1400, \"to\": 9000 } , \"machine\": { \"to\" : 9100} } } } }'", "oc get machineconfigpools", "oc describe node | egrep \"hostname|machineconfig\"", "kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done", "oc get machineconfig <config_name> -o yaml | grep ExecStart", "ExecStart=/usr/local/bin/mtu-migration.sh", "for manifest in control-plane-interface worker-interface; do oc create -f USDmanifest.yaml done", "oc get machineconfigpools", "oc describe node | egrep \"hostname|machineconfig\"", "kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done", "oc get machineconfig <config_name> -o yaml | grep path:", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"ovnKubernetesConfig\": { \"mtu\": <mtu> }}}}'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"openshiftSDNConfig\": { \"mtu\": <mtu> }}}}'", "oc get machineconfigpools", "oc describe network.config cluster", "oc get nodes", "oc debug node/<node> -- chroot /host ip address show <interface>", "ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8051" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/networking/changing-cluster-network-mtu
Chapter 24. KafkaAuthorizationCustom schema reference
Chapter 24. KafkaAuthorizationCustom schema reference Used in: KafkaClusterSpec Full list of KafkaAuthorizationCustom schema properties Configures the Kafka custom resource to use a custom authorizer and define Access Control Lists (ACLs). ACLs allow you to define which users have access to which resources at a granular level. Configure the Kafka custom resource to specify an authorizer class that implements the org.apache.kafka.server.authorizer.Authorizer interface to support custom ACLs. Set the type property in the authorization section to the value custom , and configure a list of super users. Super users are always allowed without querying ACL rules. Add additional configuration for initializing the custom authorizer using Kafka.spec.kafka.config . Example custom authorization configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ... authorization: type: custom authorizerClass: io.mycompany.CustomAuthorizer superUsers: - CN=user-1 - user-2 - CN=user-3 # ... config: authorization.custom.property1=value1 authorization.custom.property2=value2 # ... Note The super.user configuration option in the config property in Kafka.spec.kafka is ignored. Designate super users in the authorization property instead. 24.1. Adding custom authorizer JAR files to the container image In addition to the Kafka custom resource configuration, the JAR files containing the custom authorizer class along with its dependencies must be available on the classpath of the Kafka broker. You can add them by building Streams for Apache Kafka from the source-code. The Streams for Apache Kafka build process provides a mechanism to add custom third-party libraries to the generated Kafka broker container image by adding them as dependencies in the pom.xml file under the docker-images/artifacts/kafka-thirdparty-libs directory. The directory contains different folders for different Kafka versions. Choose the appropriate folder. Before modifying the pom.xml file, the third-party library must be available in a Maven repository, and that Maven repository must be accessible to the Streams for Apache Kafka build process. Alternatively, you can add the JARs to an existing Streams for Apache Kafka container image: 24.2. Using custom authorizers with OAuth authentication When using oauth authentication with a groupsClaim configuration to extract user group information from JWT tokens, group information can be used in custom authorization calls. Groups are accessible through the OAuthKafkaPrincipal object during custom authorization calls, as follows: 24.3. KafkaAuthorizationCustom schema properties The type property is a discriminator that distinguishes use of the KafkaAuthorizationCustom type from KafkaAuthorizationSimple , KafkaAuthorizationOpa , KafkaAuthorizationKeycloak . It must have the value custom for the type KafkaAuthorizationCustom . Property Property type Description type string Must be custom . authorizerClass string Authorization implementation class, which must be available in classpath. superUsers string array List of super users, which are user principals with unlimited access rights. supportsAdminApi boolean Indicates whether the custom authorizer supports the APIs for managing ACLs using the Kafka Admin API. Defaults to false .
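A hedged sketch of the image-layering approach from Section 24.1: build a derived Kafka image that copies the authorizer JAR onto the broker classpath, then push it to a registry your cluster can reach. The local ./my-authorizer/ directory, the image tag, and the target registry are assumptions for illustration:

# Dockerfile that layers the custom authorizer JARs into the Kafka image
cat > Dockerfile <<'EOF'
FROM registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0
USER root:root
COPY ./my-authorizer/ /opt/kafka/libs/
USER 1001
EOF

podman build -t my-registry.example.com/kafka-custom-authz:2.9.0 .
podman push my-registry.example.com/kafka-custom-authz:2.9.0

The Kafka custom resource can then be pointed at the resulting image (for example, through its image override property) so that the authorizerClass configured above is found on the classpath.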
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # authorization: type: custom authorizerClass: io.mycompany.CustomAuthorizer superUsers: - CN=user-1 - user-2 - CN=user-3 # config: authorization.custom.property1=value1 authorization.custom.property2=value2 #", "FROM registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 USER root:root COPY ./ my-authorizer / /opt/kafka/libs/ USER 1001", "public List<AuthorizationResult> authorize(AuthorizableRequestContext requestContext, List<Action> actions) { KafkaPrincipal principal = requestContext.principal(); if (principal instanceof OAuthKafkaPrincipal) { OAuthKafkaPrincipal p = (OAuthKafkaPrincipal) principal; for (String group: p.getGroups()) { System.out.println(\"Group: \" + group); } } }" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkaauthorizationcustom-reference
Chapter 13. Volume Snapshots
Chapter 13. Volume Snapshots A volume snapshot is the state of the storage volume in a cluster at a particular point in time. These snapshots help to use storage more efficiently by not having to make a full copy each time and can be used as building blocks for developing an application. You can create multiple snapshots of the same persistent volume claim (PVC). For CephFS, you can create up to 100 snapshots per PVC. For RADOS Block Device (RBD), you can create up to 512 snapshots per PVC. Note You cannot schedule periodic creation of snapshots. 13.1. Creating volume snapshots You can create a volume snapshot either from the Persistent Volume Claim (PVC) page or the Volume Snapshots page. Prerequisites For a consistent snapshot, the PVC should be in Bound state and not be in use. Ensure to stop all IO before taking the snapshot. Note OpenShift Data Foundation only provides crash consistency for a volume snapshot of a PVC if a pod is using it. For application consistency, be sure to first tear down a running pod to ensure consistent snapshots or use any quiesce mechanism provided by the application to ensure it. Procedure From the Persistent Volume Claims page Click Storage Persistent Volume Claims from the OpenShift Web Console. To create a volume snapshot, do one of the following: Beside the desired PVC, click Action menu (...) Create Snapshot . Click on the PVC for which you want to create the snapshot and click Actions Create Snapshot . Enter a Name for the volume snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, click Create Volume Snapshot . Choose the required Project from the drop-down list. Choose the Persistent Volume Claim from the drop-down list. Enter a Name for the snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. Verification steps Go to the Details page of the PVC and click the Volume Snapshots tab to see the list of volume snapshots. Verify that the new volume snapshot is listed. Click Storage Volume Snapshots from the OpenShift Web Console. Verify that the new volume snapshot is listed. Wait for the volume snapshot to be in Ready state. 13.2. Restoring volume snapshots When you restore a volume snapshot, a new Persistent Volume Claim (PVC) gets created. The restored PVC is independent of the volume snapshot and the parent PVC. You can restore a volume snapshot from either the Persistent Volume Claim page or the Volume Snapshots page. Procedure From the Persistent Volume Claims page You can restore volume snapshot from the Persistent Volume Claims page only if the parent PVC is present. Click Storage Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name with the volume snapshot to restore a volume snapshot as a new PVC. In the Volume Snapshots tab, click the Action menu (...) to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. 
Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots tab, click the Action menu (...) to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. Verification steps Click Storage Persistent Volume Claims from the OpenShift Web Console and confirm that the new PVC is listed in the Persistent Volume Claims page. Wait for the new PVC to reach Bound state. 13.3. Deleting volume snapshots Prerequisites For deleting a volume snapshot, the volume snapshot class which is used in that particular volume snapshot should be present. Procedure From Persistent Volume Claims page Click Storage Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name which has the volume snapshot that needs to be deleted. In the Volume Snapshots tab, beside the desired volume snapshot, click Action menu (...) Delete Volume Snapshot . From Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, beside the desired volume snapshot click Action menu (...) Delete Volume Snapshot . Verification steps Ensure that the deleted volume snapshot is not present in the Volume Snapshots tab of the PVC details page. Click Storage Volume Snapshots and ensure that the deleted volume snapshot is not listed.
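The console steps above also have CLI equivalents, which can be convenient for scripting. The following is a minimal sketch with oc; the PVC name, snapshot class, and storage class are placeholder assumptions and must match the classes available in your cluster:

# Create a volume snapshot of an existing PVC
oc apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-snapshot
spec:
  volumeSnapshotClassName: ocs-storagecluster-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: mysql-data
EOF

# Wait for the snapshot to report READYTOUSE=true
oc get volumesnapshot mysql-snapshot

# Restore the snapshot as a new, independent PVC
oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-restored
spec:
  storageClassName: ocs-storagecluster-ceph-rbd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  dataSource:
    name: mysql-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF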
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/volume-snapshots_osp
Chapter 4. Configuring access to external applications with token-based authentication
Chapter 4. Configuring access to external applications with token-based authentication Token-based authentication permits authentication of third-party tools and services with the platform through integrated OAuth 2 token support, and allows you to access external applications without having to store your password on disk. For more information on the OAuth2 specification, see The OAuth 2.0 Authorization Framework . For more information on using the manage utility to create tokens, see Token and session management . 4.1. Applications Create and configure token-based authentication for external applications such as ServiceNow and Jenkins. With token-based authentication, external applications can easily integrate with Ansible Automation Platform. Important Automation controller OAuth applications on the platform UI are not supported for 2.4 to 2.5 migration. See this Knowledgebase article for more information. With OAuth 2 you can use tokens to share data with an application without disclosing login information. You can configure these tokens as read-only. You can create an application that is representative of the external application you are integrating with, then use it to create tokens for the application to use on behalf of its users. Associating these tokens with an application resource enables you to manage all tokens issued for a particular application. By separating the issue of tokens under OAuth Applications , you can revoke all tokens based on the application without having to revoke all tokens in the system. 4.1.1. Getting started with OAuth Applications You can access the OAuth Applications page from the navigation panel by selecting Access Management OAuth Applications . From there you can view, create, sort and search for applications currently managed by Ansible Automation Platform and automation controller. If no applications exist, you can create one by clicking Create OAuth application . Access rules for applications are as follows: Platform administrators can view and manipulate all applications in the system. Platform auditors can only view applications in the system. Tokens, on the other hand, are resources used to authenticate incoming requests and mask the permissions of the underlying user. Access rules for tokens are as follows: Users can create personal access tokens for themselves. Platform administrators are able to view and manipulate every token in the system. Platform auditors can only view tokens in the system. Other normal users are only able to view and manipulate their own tokens. Note Users can only view the token or refresh the token value at the time of creation. 4.1.1.1. Application functions Several OAuth 2 utilities are available for authorization, token refresh, and revoke. You can specify the following grant types when creating an application: Password This grant type is ideal for users who have native access to the web application and must be used when the client is the resource owner. Authorization code This grant type should be used when access tokens must be issued directly to an external application or service. Note You can only use the authorization code type to acquire an access token when using an application. When integrating an external web application with Ansible Automation Platform, that web application might need to create OAuth2 tokens on behalf of users in that other web application. 
Creating an application in the platform with the authorization code grant type is the preferred way to do this because: This allows an external application to obtain a token from Ansible Automation Platform for a user, using their credentials. Compartmentalized tokens issued for a particular application enables those tokens to be easily managed. For example, revoking all tokens associated with that application without having to revoke all tokens in the system. 4.1.1.1.1. Requesting an access token after expiration The Gateway access token expiration defaults to 600 seconds (10 minutes). The best way to set up application integrations using the Authorization code grant type is to allowlist the origins for those cross-site requests. More generally, you must allowlist the service or application you are integrating with the platform, for which you want to provide access tokens. To do this, have your administrator add this allowlist to their local Ansible Automation Platform settings file: CORS_ORIGIN_ALLOW_ALL = True CORS_ALLOWED_ORIGIN_REGEXES = [ r"http://django-oauth-toolkit.herokuapp.com*", r"http://www.example.com*" ] Where http://django-oauth-toolkit.herokuapp.com and http://www.example.com are applications requiring tokens with which to access the platform. 4.1.2. Creating a new application When integrating an external web application with Ansible Automation Platform, the web application might need to create OAuth2 tokens on behalf of users of the web application. Creating an application with the Authorization Code grant type is the preferred way to do this for the following reasons: External applications can obtain a token for users, using their credentials. Compartmentalized tokens issued for a particular application, enables those tokens to be easily managed. For example, revoking all tokens associated with that application. Procedure From the navigation panel, select Access Management OAuth Applications . Click Create OAuth application . The Create Application page opens. Enter the following details: Name (required) Enter a name for the application you want to create. Description (optional) Include a short description for your application. Organization (required) Select an organization with which this application is associated. Authorization grant type (required) Select one of the grant types to use for the user to get tokens for this application. For more information, see Application functions for more information about grant types. Client Type (required) Select the level of security of the client device. Redirect URIS Provide a list of allowed URIs, separated by spaces. You need this if you specified the grant type to be Authorization code . Click Create OAuth application , or click Cancel to abandon your changes. The Client ID and Client Secret display in a window. This will be the only time the client secret will be shown. Note The Client Secret is only created when the Client type is set to Confidential . Click the copy icon and save the client ID and client secret to integrate an external application with Ansible Automation Platform. 4.2. Adding tokens You can view a list of users that have tokens to access an application by selecting the Tokens tab in the OAuth Applications details page. Note You can only create OAuth 2 Tokens for your own user, which means you can only configure or view tokens from your own user profile. When authentication tokens have been configured, you can select the application to which the token is associated and the level of access that the token has. 
Procedure From the navigation panel, select Access Management Users . Select the username for your user profile to configure OAuth 2 tokens. Select the Tokens tab. When no tokens are present, the Tokens screen prompts you to add them. Click Create token to open the Create Token window. Enter the following details: Application Enter the name of the application with which you want to associate your token. Alternatively, you can search for it by clicking Browse . This opens a separate window that enables you to choose from the available options. Select Name from the filter list to filter by name if the list is extensive. Note To create a Personal Access Token (PAT) that is not linked to any application, leave the Application field blank. Description (optional) Provide a short description for your token. Scope (required) Specify the level of access you want this token to have. The scope of an OAuth 2 token can be set as one of the following: Write : Allows requests sent with this token to add, edit and delete resources in the system. Read : Limits actions to read only. Note that the write scope includes read scope. Click Create token , or click Cancel to abandon your changes. The Token information is displayed with Token and Refresh Token information, and the expiration date of the token. This will be the only time the token and refresh token will be shown. You can view the token association and token information from the list view. Click the copy icon and save the token and refresh token for future use. Verification You can verify that the application now shows the user with the appropriate token using the Tokens tab on the Applications details page. From the navigation panel, select Access Management OAuth Applications . Select the application you want to verify from the Applications list view. Select the Tokens tab. Your token should be displayed in the list of tokens associated with the application you chose. Additional resources If you are a system administrator and have to create or remove tokens for other users, see the revoke and create commands in Token and session management . 4.2.1. Application token functions The refresh and revoke functions associated with tokens, for tokens at the /o/ endpoints can currently only be carried out with application tokens. 4.2.1.1. Refresh an existing access token The following example shows an existing access token with a refresh token provided: { "id": 35, "type": "access_token", ... "user": 1, "token": "omMFLk7UKpB36WN2Qma9H3gbwEBSOc", "refresh_token": "AL0NK9TTpv0qp54dGbC4VUZtsZ9r8z", "application": 6, "expires": "2017-12-06T03:46:17.087022Z", "scope": "read write" } The /o/token/ endpoint is used for refreshing the access token: curl -X POST \ -d "grant_type=refresh_token&refresh_token=AL0NK9TTpv0qp54dGbC4VUZtsZ9r8z" \ -u "gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo" \ http://<gateway>/o/token/ -i Where refresh_token is provided by refresh_token field of the preceding access token. The authentication information is of format <client_id>:<client_secret> , where client_id and client_secret are the corresponding fields of the underlying related application of the access token. Note The special OAuth 2 endpoints only support using the x-www-form-urlencoded Content-type , so as a result, none of the /o/* endpoints accept application/json . 
On success, a response displays in JSON format containing the new (refreshed) access token with the same scope information as the one: HTTP/1.1 200 OK Server: nginx/1.12.2 Date: Tue, 05 Dec 2017 17:54:06 GMT Content-Type: application/json Content-Length: 169 Connection: keep-alive Content-Language: en Vary: Accept-Language, Cookie Pragma: no-cache Cache-Control: no-store Strict-Transport-Security: max-age=15768000 {"access_token": "NDInWxGJI4iZgqpsreujjbvzCfJqgR", "token_type": "Bearer", "expires_in": 315360000000, "refresh_token": "DqOrmz8bx3srlHkZNKmDpqA86bnQkT", "scope": "read write"} The refresh operation replaces the existing token by deleting the original and then immediately creating a new token with the same scope and related application as the original one. Verify that the new token is present and the old one is deleted in the gateway/api/v1/tokens/ endpoint. 4.2.1.2. Revoke an access token You can revoke an access token by deleting the token in the platform UI, or by using the /o/revoke-token/ endpoint. Revoking an access token by this method is the same as deleting the token resource object, but it enables you to delete a token by providing its token value, and the associated client_id (and client_secret if the application is confidential ). For example: curl -X POST -d "token=rQONsve372fQwuc2pn76k3IHDCYpi7" \ -u "gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo" \ http://<gateway>/o/revoke_token/ -i Note The special OAuth 2 endpoints only support using the x-www-form-urlencoded Content-type , so as a result, none of the /o/* endpoints accept application/json . The Allow External Users to Create Oauth2 Tokens ( ALLOW_OAUTH2_FOR_EXTERNAL_USERS in the API) setting is disabled by default. External users refer to users authenticated externally with a service such as LDAP, or any of the other SSO services. This setting ensures external users cannot create their own tokens. If you enable then disable it, any tokens created by external users in the meantime will still exist, and are not automatically revoked. This setting can be configured from the Settings Platform gateway menu. Alternatively, to revoke OAuth2 tokens, you can use the manage utility, see Revoke oauth2 tokens . On success, a response of 200 OK is displayed. Verify the deletion by checking whether the token is present in the gateway/api/v1/tokens/ endpoint. 4.2.2. Token and session management Ansible Automation Platform supports the following commands for OAuth2 token management: create_oauth2_token revoke_oauth2_tokens cleartokens clearsessions 4.2.2.1. create_oauth2_token Use the following command to create OAuth2 tokens (specify the username for example_user ): USD aap-gateway-manage create_oauth2_token --user example_user New OAuth2 token for example_user: j89ia8OO79te6IAZ97L7E8bMgXCON2 Ensure that you provide a valid user when creating tokens. Otherwise, an error message that you attempted to issue the command without specifying a user, or supplied a username that does not exist, is displayed. 4.2.2.2. revoke_oauth2_tokens Use this command to revoke OAuth2 tokens, both application tokens and personal access tokens (PAT). It revokes all application tokens (but not their associated refresh tokens), and revokes all personal access tokens. However, you can also specify a user for whom to revoke all tokens. 
To revoke all existing OAuth2 tokens use the following command: USD aap-gateway-manage revoke_oauth2_tokens To revoke all OAuth2 tokens and their refresh tokens use the following command: USD aap-gateway-manage revoke_oauth2_tokens --revoke_refresh To revoke all OAuth2 tokens for the user with id=example_user (specify the username for example_user ): USD aap-gateway-manage revoke_oauth2_tokens --user example_user To revoke all OAuth2 tokens and refresh token for the user with id=example_user : USD aap-gateway-manage revoke_oauth2_tokens --user example_user --revoke_refresh 4.2.2.3. cleartokens Use this command to clear tokens which have already been revoked. For more information, see cleartokens in Django's Oauth Toolkit documentation. 4.2.2.4. clearsessions Use this command to delete all sessions that have expired. For more information, see Clearing the session store in Django's Oauth Toolkit documentation. For more information on OAuth2 token management in the UI, see the Applications .
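For reference, the initial token request against the /o/token/ endpoint mirrors the refresh call shown earlier, only with a different grant type. This is a sketch using the password grant and x-www-form-urlencoded data; the username, password, client ID, and client secret are placeholders, and it assumes the application was created with the Password grant type:

curl -X POST \
  -d "grant_type=password&username=example_user&password=example_password&scope=read" \
  -u "<client_id>:<client_secret>" \
  https://<gateway>/o/token/ -i

On success, the JSON response contains the access_token, refresh_token, scope, and expires_in fields shown in the refresh example above.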
[ "CORS_ORIGIN_ALLOW_ALL = True CORS_ALLOWED_ORIGIN_REGEXES = [ r\"http://django-oauth-toolkit.herokuapp.com*\", r\"http://www.example.com*\" ]", "{ \"id\": 35, \"type\": \"access_token\", \"user\": 1, \"token\": \"omMFLk7UKpB36WN2Qma9H3gbwEBSOc\", \"refresh_token\": \"AL0NK9TTpv0qp54dGbC4VUZtsZ9r8z\", \"application\": 6, \"expires\": \"2017-12-06T03:46:17.087022Z\", \"scope\": \"read write\" }", "curl -X POST -d \"grant_type=refresh_token&refresh_token=AL0NK9TTpv0qp54dGbC4VUZtsZ9r8z\" -u \"gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo\" http://<gateway>/o/token/ -i", "HTTP/1.1 200 OK Server: nginx/1.12.2 Date: Tue, 05 Dec 2017 17:54:06 GMT Content-Type: application/json Content-Length: 169 Connection: keep-alive Content-Language: en Vary: Accept-Language, Cookie Pragma: no-cache Cache-Control: no-store Strict-Transport-Security: max-age=15768000 {\"access_token\": \"NDInWxGJI4iZgqpsreujjbvzCfJqgR\", \"token_type\": \"Bearer\", \"expires_in\": 315360000000, \"refresh_token\": \"DqOrmz8bx3srlHkZNKmDpqA86bnQkT\", \"scope\": \"read write\"}", "curl -X POST -d \"token=rQONsve372fQwuc2pn76k3IHDCYpi7\" -u \"gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo\" http://<gateway>/o/revoke_token/ -i", "aap-gateway-manage create_oauth2_token --user example_user New OAuth2 token for example_user: j89ia8OO79te6IAZ97L7E8bMgXCON2", "aap-gateway-manage revoke_oauth2_tokens", "aap-gateway-manage revoke_oauth2_tokens --revoke_refresh", "aap-gateway-manage revoke_oauth2_tokens --user example_user", "aap-gateway-manage revoke_oauth2_tokens --user example_user --revoke_refresh" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/access_management_and_authentication/gw-token-based-authentication
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in any other fields at their default values. Add a reporter name. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/proc-providing-feedback-on-redhat-documentation
B.21.5. RHSA-2011:0471 - Critical: firefox security update
B.21.5. RHSA-2011:0471 - Critical: firefox security update Updated firefox packages that fix several security issues are now available for Red Hat Enterprise Linux 4, 5, and 6. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base scores, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Firefox is an open source web browser. XULRunner provides the XUL Runtime environment for Mozilla Firefox. CVE-2011-0080 , CVE-2011-0081 Several flaws were found in the processing of malformed web content. A web page containing malicious content could possibly lead to arbitrary code execution with the privileges of the user running Firefox. CVE-2011-0078 An arbitrary memory write flaw was found in the way Firefox handled out-of-memory conditions. If all memory was consumed when a user visited a malicious web page, it could possibly lead to arbitrary code execution with the privileges of the user running Firefox. CVE-2011-0077 An integer overflow flaw was found in the way Firefox handled the HTML frameset tag. A web page with a frameset tag containing large values for the "rows" and "cols" attributes could trigger this flaw, possibly leading to arbitrary code execution with the privileges of the user running Firefox. CVE-2011-0075 A flaw was found in the way Firefox handled the HTML iframe tag. A web page with an iframe tag containing a specially-crafted source address could trigger this flaw, possibly leading to arbitrary code execution with the privileges of the user running Firefox. CVE-2011-0074 A flaw was found in the way Firefox displayed multiple marquee elements. A malformed HTML document could cause Firefox to execute arbitrary code with the privileges of the user running Firefox. CVE-2011-0073 A flaw was found in the way Firefox handled the nsTreeSelection element. Malformed content could cause Firefox to execute arbitrary code with the privileges of the user running Firefox. CVE-2011-0072 A use-after-free flaw was found in the way Firefox appended frame and iframe elements to a DOM tree when the NoScript add-on was enabled. Malicious HTML content could cause Firefox to execute arbitrary code with the privileges of the user running Firefox. CVE-2011-0071 A directory traversal flaw was found in the Firefox resource:// protocol handler. Malicious content could cause Firefox to access arbitrary files accessible to the user running Firefox. CVE-2011-0070 A double free flaw was found in the way Firefox handled "application/http-index-format" documents. A malformed HTTP response could cause Firefox to execute arbitrary code with the privileges of the user running Firefox. CVE-2011-0069 A flaw was found in the way Firefox handled certain JavaScript cross-domain requests. If malicious content generated a large number of cross-domain JavaScript requests, it could cause Firefox to execute arbitrary code with the privileges of the user running Firefox. CVE-2011-0067 A flaw was found in the way Firefox displayed the autocomplete pop-up. Malicious content could use this flaw to steal form history information. CVE-2011-0066 , CVE-2011-0065 Two use-after-free flaws were found in the Firefox mObserverList and mChannel objects. Malicious content could use these flaws to execute arbitrary code with the privileges of the user running Firefox. CVE-2011-1202 A flaw was found in the Firefox XSLT generate-id() function. 
This function returned the memory address of an object in memory, which could possibly be used by attackers to bypass address randomization protections. For technical details regarding these flaws, refer to the Mozilla security advisories for Firefox 3.6.17. http://www.mozilla.org/security/known-vulnerabilities/firefox36.html#firefox3.6.17 All Firefox users should upgrade to these updated packages, which contain Firefox version 3.6.17, which corrects these issues. After installing the update, Firefox must be restarted for the changes to take effect.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhsa-2011-0471
5.161. libusb1
5.161. libusb1 5.161.1. RHBA-2012:0759 - libusb1 bug fix and enhancement update Updated libusb1 packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The libusb1 packages provide a way for applications to access USB devices. The libusb1 packages have been upgraded to upstream version 1.0.9, which provides a number of bug fixes and enhancements over the previous version. In addition, this update adds a new API needed to support SPICE (the Simple Protocol for Independent Computing Environments) USB redirection. (BZ# 758094 ) All users of libusb1 are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/libusb1
Chapter 3. User tasks
Chapter 3. User tasks 3.1. Creating applications from installed Operators This guide walks developers through an example of creating applications from an installed Operator using the OpenShift Container Platform web console. 3.1.1. Creating an etcd cluster using an Operator This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM). Prerequisites Access to an OpenShift Container Platform 4.9 cluster. The etcd Operator already installed cluster-wide by an administrator. Procedure Create a new project in the OpenShift Container Platform web console for this procedure. This example uses a project called my-etcd . Navigate to the Operators Installed Operators page. The Operators that have been installed to the cluster by the cluster administrator and are available for use are shown here as a list of cluster service versions (CSVs). CSVs are used to launch and manage the software provided by the Operator. Tip You can get this list from the CLI using: USD oc get csv On the Installed Operators page, click the etcd Operator to view more details and available actions. As shown under Provided APIs , this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similar to the built-in native Kubernetes ones, such as Deployment or ReplicaSet , but contain logic specific to managing etcd. Create a new etcd cluster: In the etcd Cluster API box, click Create instance . The screen allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster. For now, click Create to finalize. This triggers the Operator to start up the pods, services, and other components of the new etcd cluster. Click on the example etcd cluster, then click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator. Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project. All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. If you want to enable additional users with this ability, project administrators can add the role using the following command: USD oc policy add-role-to-user edit <user> -n <target_project> You now have an etcd cluster that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, cluster administrators or developers with proper access can now easily use the database with their applications. 3.2. Installing Operators in your namespace If a cluster administrator has delegated Operator installation permissions to your account, you can install and subscribe an Operator to your namespace in a self-service manner. 3.2.1. Prerequisites A cluster administrator must add certain permissions to your OpenShift Container Platform user account to allow self-service Operator installation to a namespace. See Allowing non-cluster administrators to install Operators for details. 3.2.2. About Operator installation with OperatorHub OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster. 
As a user with the proper permissions, you can install an Operator from OperatorHub using the OpenShift Container Platform web console or CLI. During installation, you must determine the following initial settings for the Operator: Installation Mode Choose a specific namespace in which to install the Operator. Update Channel If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list. Approval Strategy You can choose automatic or manual updates. If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Understanding OperatorHub 3.2.3. Installing from OperatorHub using the web console You can install and subscribe to an Operator from OperatorHub using the OpenShift Container Platform web console. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. Procedure Navigate in the web console to the Operators OperatorHub page. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator. You can also filter options by Infrastructure Features . For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments. Select the Operator to display additional information. Note Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing. Read the information about the Operator and click Install . On the Install Operator page: Choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace. Select an Update Channel (if more than one is available). Select Automatic or Manual approval strategy, as described earlier. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster. If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan. After approving on the Install Plan page, the subscription upgrade status moves to Up to date . If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention. After the upgrade status of the subscription is Up to date , select Operators Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace. Note For the All namespaces... installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces. If it does not: Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace... 
installation mode was selected) on the Workloads Pods page that are reporting issues to troubleshoot further. 3.2.4. Installing from OperatorHub using the CLI Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. Use the oc command to create or update a Subscription object. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. Install the oc command to your local system. Procedure View the list of Operators available to the cluster from OperatorHub: USD oc get packagemanifests -n openshift-marketplace Example output NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m ... couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m ... etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m ... Note the catalog for your desired Operator. Inspect your desired Operator to verify its supported install modes and available channels: USD oc describe packagemanifests <operator_name> -n openshift-marketplace An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group. The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces , then the openshift-operators namespace already has an appropriate Operator group in place. However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one. Note The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode. Create an OperatorGroup object YAML file, for example operatorgroup.yaml : Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace> Create the OperatorGroup object: USD oc apply -f operatorgroup.yaml Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml : Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: "-v=10" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: "Exists" resources: 11 requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" nodeSelector: 12 foo: bar 1 For AllNamespaces install mode usage, specify the openshift-operators namespace. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage. 2 Name of the channel to subscribe to. 3 Name of the Operator to subscribe to. 4 Name of the catalog source that provides the Operator. 5 Namespace of the catalog source. 
Use openshift-marketplace for the default OperatorHub catalog sources. 6 The env parameter defines a list of Environment Variables that must exist in all containers in the pod created by OLM. 7 The envFrom parameter defines a list of sources to populate Environment Variables in the container. 8 The volumes parameter defines a list of Volumes that must exist on the pod created by OLM. 9 The volumeMounts parameter defines a list of VolumeMounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator. 10 The tolerations parameter defines a list of Tolerations for the pod created by OLM. 11 The resources parameter defines resource constraints for all the containers in the pod created by OLM. 12 The nodeSelector parameter defines a NodeSelector for the pod created by OLM. Create the Subscription object: USD oc apply -f sub.yaml At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Additional resources Operator groups Channel names 3.2.5. Installing a specific version of an Operator You can install a specific version of an Operator by setting the cluster service version (CSV) in a Subscription object. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions OpenShift CLI ( oc ) installed Procedure Create a Subscription object YAML file that subscribes a namespace to an Operator with a specific version by setting the startingCSV field. Set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog. For example, the following sub.yaml file can be used to install the Red Hat Quay Operator specifically to version 3.4.0: Subscription with a specific starting Operator version apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay spec: channel: quay-v3.4 installPlanApproval: Manual 1 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: quay-operator.v3.4.0 2 1 Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation. 2 Set a specific version of an Operator CSV. Create the Subscription object: USD oc apply -f sub.yaml Manually approve the pending install plan to complete the Operator installation. Additional resources Manually approving a pending Operator update
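As a command-line alternative to the manual approval described in the procedure above, the pending install plan can also be approved with oc . This is a sketch; the install plan name is a placeholder that you look up first, and the quay namespace matches the example subscription above:

# Find the pending install plan created for the subscription.
oc get installplan -n quay
# Approve it so the installation of the pinned CSV can complete.
oc patch installplan <install_plan_name> -n quay \
    --type merge --patch '{"spec":{"approved":true}}'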
[ "oc get csv", "oc policy add-role-to-user edit <user> -n <target_project>", "oc get packagemanifests -n openshift-marketplace", "NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m", "oc describe packagemanifests <operator_name> -n openshift-marketplace", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>", "oc apply -f operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar", "oc apply -f sub.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay spec: channel: quay-v3.4 installPlanApproval: Manual 1 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: quay-operator.v3.4.0 2", "oc apply -f sub.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/operators/user-tasks
Chapter 14. Adding allowed registries to the automation controller image configuration
Chapter 14. Adding allowed registries to the automation controller image configuration Before you can deploy a container image in automation hub, you must add the registry to the allowedRegistries list in the automation controller image configuration. To do this, copy and paste the following code into your automation controller image YAML. Procedure Log in to Red Hat OpenShift Container Platform . Navigate to Home Search . Select the Resources drop-down list and type "Image". Select Image (config.openshift.io/v1) . Click Cluster under the Name heading. Select the YAML tab. Paste the following under the spec value: spec: registrySources: allowedRegistries: - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - <OCP route for your automation hub> Click Save .
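If you prefer the command line to the web console, the same cluster image configuration object can be inspected and edited with oc . This is a sketch and assumes you are logged in with cluster-admin privileges:

# Review the current registrySources settings on the cluster image configuration.
oc get image.config.openshift.io/cluster -o yaml
# Open the object for editing and add the allowedRegistries entries shown above under spec.
oc edit image.config.openshift.io/cluster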
[ "spec: registrySources: allowedRegistries: - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - <OCP route for your automation hub>" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/aap-add-allowed-registries_using-a-rhsso-operator
Chapter 2. Installing Metrics Store
Chapter 2. Installing Metrics Store Prerequisites Computing resources: 4 CPU cores 30 GB RAM 500 GB SSD disk For the Metrics Store Installer virtual machine: 4 CPU cores 8 GB RAM Note The computing resource requirements are for an all-in-one installation, with a single Metrics Store virtual machine. The all-in-one installation can collect data from up to 50 hosts, each running 20 virtual machines. Operating system: Red Hat Enterprise Linux 7.7 or later Software: Red Hat Virtualization 4.3.5 or later Network configuration: see Configuring networking for Metrics Store virtual machines 2.1. Creating the Metrics Store virtual machines To create the Metrics Store virtual machines, perform the following tasks: Configure the Metrics Store installation. Create the following Metrics Store virtual machines: The Metrics Store Installer virtual machine - a temporary virtual machine for deploying Red Hat OpenShift and services on the Metrics Store virtual machines. One or more Metrics Store virtual machines. Verify the Metrics Store virtual machines. 2.1.1. Configuring the Metrics Store installation Procedure Log in to the Manager machine using SSH. Update the packages: Copy metrics-store-config.yml.example to create metrics-store-config.yml : Edit the parameters in metrics-store-config.yml to match your installation environment, and save the file. The parameters are documented in the file. To set the logical network that is used for the metrics-store-installer and Metrics Store virtual machines, add the following lines to metrics-store-config.yml : On the Manager machine, copy /etc/ovirt-engine-metrics/secure_vars.yaml.example to /etc/ovirt-engine-metrics/secure_vars.yaml : Edit the parameters in /etc/ovirt-engine-metrics/secure_vars.yaml to match the details of your specific environment. Encrypt the secure_vars.yaml file: 2.1.2. Creating Metrics Store virtual machines Procedure Go to the ovirt-engine-metrics directory: Run the ovirt-metrics-store-installation playbook to create the virtual machines: Note To enable verbose mode for debugging, add -vvv to the end of the command, or add '-v' to enable light verbose mode, or add -vvvv to enable connection debugging. For more extensive debugging options, enable debugging through the Ansible playbook as described in Enable debugging via Ansible playbook 2.1.3. Verifying the creation of the virtual machines Procedure Log in to the Administration Portal. Click Compute Virtual Machines to verify that the metrics-store-installer virtual machine and the Metrics Store virtual machines are running. 2.1.4. Changing the default LDAP authentication identity provider (optional) In the standard Metrics Store installation, the allow_all identity provider is configured by default. You can change this default during installation by configuring the openshift_master_identity_providers parameter in the inventory file integ.ini . You can also configure the session options in the OAuth configuration in the integ.ini inventory file. Procedure Locate the integ.ini in the root directory of the metrics-store-installer virtual machine. Follow the instructions for updating the identity provider configuration in Configuring identity providers with Ansible . 2.2. Configuring networking for Metrics Store virtual machines 2.2.1. Configuring DNS resolution for Metrics Store virtual machines Procedure In the metrics-store-config.yml DNS zone parameter, public_hosted_zone should be defined as a wildcard DNS record ( *. example.com ). 
That wildcard DNS should resolve to the IP address of your master0 virtual machine. Add the hostnames of the Metrics Store virtual machines to your DNS server. 2.2.2. Setting a static MAC address for a Metrics Store virtual machine (optional) Procedure Log in to the Administration Portal. Click Compute Virtual Machines and select a Metrics Store virtual machine. In the Network Interfaces tab, select a NIC and click Edit . Select Custom MAC Address , enter the MAC address, and click OK . Reboot the virtual machine. 2.2.3. Configuring firewall ports The following table describes the firewall settings needed for communication between the ports used by Metrics Store. Table 2.1. Configure the firewall to allow connections to specific ports ID Port(s) Protocol Sources Destinations Purpose MS1 9200 TCP RHV Red Hat Virtualization Hosts RHV Manager Metrics Store VM Transfer data to ElasticSearch. MS2 5601 TCP Kibana user Metrics Store VM Give users access to the Kibana web interface. Note Whether a connection is encrypted or not depends on how you deployed the software. 2.3. Deploying Metrics Store services on Red Hat OpenShift Deploy and verify Red Hat OpenShift, Elasticsearch, Curator (for managing Elasticsearch indices and snapshots), and Kibana on the Metrics Store virtual machines. Procedure Log in to the metrics-store-installer virtual machine. Run the install_okd playbook to deploy Red Hat OpenShift and Metrics Store services to the Metrics Store virtual machines: Note To enable verbose mode for debugging, add -vvv to the end of the command, or add '-v' to enable light verbose mode, or add -vvvv to enable connection debugging. Verify the deployment by logging in to each Metrics Store virtual machine: Log in to the openshift-logging project: Check that the Elasticsearch, Curator, and Kibana pods are running: If Elasticsearch is not running, see Troubleshooting related to ElasticSearch in the OpenShift Container Platform 3.11 documentation. Check the Kibana host name and record it so you can access the Kibana console in Chapter 4, Verifying the Metrics Store installation : Cleanup Log in to the Administration Portal. Click Compute Virtual Machines and delete the metrics-store-installer virtual machine.
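Referring back to Table 2.1: if firewalld manages the firewall on the Metrics Store virtual machine (an assumption about your environment; adjust the zone and the method to match your setup), opening the two listed ports might look like the following sketch:

# Run on the Metrics Store virtual machine.
firewall-cmd --permanent --add-port=9200/tcp   # MS1: Elasticsearch data transfer
firewall-cmd --permanent --add-port=5601/tcp   # MS2: Kibana web interface
firewall-cmd --reload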
[ "yum update", "cp /etc/ovirt-engine-metrics/metrics-store-config.yml.example /etc/ovirt-engine-metrics/config.yml.d/metrics-store-config.yml", "ovirt_template_nics - the following are the default values for setting the logical network used by the metrics_store_installer and the Metrics Store virtual machines ovirt_template_nics: - name: nic1 profile_name: ovirtmgmt interface: virtio", "cp /etc/ovirt-engine-metrics/secure_vars.yaml.example /etc/ovirt-engine-metrics/secure_vars.yaml", "ansible-vault encrypt /etc/ovirt-engine-metrics/secure_vars.yaml", "cd /usr/share/ovirt-engine-metrics", "ANSIBLE_JINJA2_EXTENSIONS=\"jinja2.ext.do\" ./configure_ovirt_machines_for_metrics.sh --playbook=ovirt-metrics-store-installation.yml --ask-vault-pass", "ANSIBLE_CONFIG=\"/usr/share/ansible/openshift-ansible/ansible.cfg\" ANSIBLE_ROLES_PATH=\"/usr/share/ansible/roles/:/usr/share/ansible/openshift-ansible/roles\" ansible-playbook -i integ.ini install_okd.yaml -e @vars.yaml -e @secure_vars.yaml --ask-vault-pass", "oc project openshift-logging", "oc get pods", "oc get routes" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/metrics_store_installation_guide/Installing_metrics_store
7.26. control-center
7.26. control-center 7.26.1. RHBA-2013:0335 - control-center bug fix update Updated control-center packages that fix one bug are now available for Red Hat Enterprise Linux 6. The control-center packages provide various configuration utilities for the GNOME desktop. These utilities allow the user to configure accessibility options, desktop fonts, keyboard and mouse properties, sound setup, desktop theme and background, user interface properties, screen resolution, and other settings. Bug Fix BZ#805069 Prior to this update, the status LEDs on Wacom tablets did not correctly indicate the current mode. With this update, the LEDs now indicate which of the Touch Ring or Touch Strip modes are active. All users of control-center are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/control-center
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/using_jdk_flight_recorder_with_red_hat_build_of_openjdk/proc-providing-feedback-on-redhat-documentation
2.11.2. Use an Interpreter
2.11.2. Use an Interpreter To specify a scripting language to use to execute the script, select the Use an interpreter option and enter the interpreter in the text box beside it. For example, /usr/bin/python2.2 can be specified for a Python script. This option corresponds to using %post --interpreter /usr/bin/python2.2 in your kickstart file.
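As a rough illustration only (the interpreter path and the script body are placeholders, not taken from this guide), the end of a kickstart file using this option might look like the following; everything after the %post line is handed to the named interpreter rather than to the default shell, and the %post section here is assumed to be the last section of the file:

%post --interpreter /usr/bin/python2.2
# Executed by the Python interpreter named above, not by /bin/sh.
import os
os.system('touch /root/post-script-ran')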
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/post_installation_script-use_an_interpreter
Part I. Migration Planning
Part I. Migration Planning Migration Planning focuses on the shift of the default Red Hat Enterprise Linux desktop environment from GNOME 2, shipped with Red Hat Enterprise Linux 5 and 6, to GNOME 3. One by one, this part of the guide briefly mentions the changes certain components have gone through and describes the new features the components possess. This guide only refers to changes to the GNOME Desktop environment. For changes to the other parts of Red Hat Enterprise Linux 7 refer to: Red Hat Enterprise Linux 7 System Administrator's Guide , for components such as the GRUB 2 boot loader, package management, systemd , or printer configuration. Red Hat Enterprise Linux 7 Migration Planning Guide for an overview of major changes in behavior and compatibility between Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7. The Migration Planning Guide also introduces the tools provided by Red Hat to assist with upgrades to Red Hat Enterprise Linux 7. Red Hat Enterprise Linux 7 Installation Guide for detailed information about installing Red Hat Enterprise Linux 7 and using the Anaconda installer. These documents can be found at http://access.redhat.com/site/documentation/Red_Hat_Enterprise_Linux/ .
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/part-migration_planning
8.225. squid
8.225. squid 8.225.1. RHBA-2014:1446 - squid bug fix update Updated squid packages that fix several bugs are now available for Red Hat Enterprise Linux 6. Squid is a high-performance proxy caching server for web clients, supporting FTP, Gopher, and HTTP data objects. Bug Fixes BZ# 876980 Prior to this update, the /etc/init.d/squid initialization script did not describe the condrestart option, and therefore it did not appear in the Usage message. This bug has been fixed, and condrestart is now displayed correctly in the Usage message. BZ# 998809 Under certain circumstances, the comm_write() function of the squid utility attempted to write to file descriptors that were being closed. Consequently, the squid utility was aborted. With this update, a patch that handles the write attempt has been introduced. As a result, squid is no longer aborted in the aforementioned scenario. BZ# 1011952 Due to a bug in the default /etc/httpd/conf.d/squid.conf configuration file, the squid utility was not allowed to access the CacheManager tool at http://localhost/Squid/cgi-bin/cachemgr.cgi. The bug has been fixed, and squid can now access CacheManager without complications. BZ# 1034616 Under certain circumstances, the squid utility leaked Domain Name System (DNS) queries. Consequently, squid often reached the limit of maximum locks set to 65,535 and terminated unexpectedly. With this update, several changes have been made to prevent leaked queries. Also, the lock limit has been increased to the maximum value of the integer data type. BZ# 1047839 Previously, after receiving a malformed Domain Name System (DNS) response, the squid utility terminated unexpectedly and did not start again. The underlying source code has been modified, and as a result, squid now handles malformed DNS responses without complications. BZ# 1058207 Under certain circumstances, child processes of the squid utility terminated unexpectedly and generated a core file. This bug has been fixed, and squid processes no longer exit abnormally. BZ# 1066368 , BZ# 1089614 Previously, the AuthBasicUserRequest method of the squid utility overrode the default user() methods with its own data. Consequently, a memory leak occurred when using basic authentication, which led to high memory consumption of squid. With this update, the aforementioned override was removed and the memory leak no longer occurs. Users of squid are advised to upgrade to these updated packages, which fix these bugs. After installing this update, the squid service will be restarted automatically.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/squid
2.7. Kernel
2.7. Kernel Thin-provisioning and scalable snapshot capabilities The dm-thinp targets, thin and thin-pool , provide a device mapper device with thin-provisioning and scalable snapshot capabilities. This feature is available as a Technology Preview. Package: kernel-2.6.32-279 kdump/kexec kernel dumping mechanism for IBM System z In Red Hat Enterprise Linux 6.3, the kdump/kexec kernel dumping mechanism is enabled for IBM System z systems as a Technology Preview, in addition to the IBM System z stand-alone and hypervisor dumping mechanism. The auto-reserve threshold is set at 4 GB; therefore, any IBM System z system with more than 4 GB of memory has the kexec/kdump mechanism enabled. Sufficient memory must be available because kdump reserves approximately 128 MB as default. This is especially important when performing an upgrade to Red Hat Enterprise Linux 6.3. Sufficient disk space must also be available for storing the dump in case of a system crash. Kdump is limited to DASD or QETH networks as dump devices until kdump on SCSI disk is supported. The following warning message may appear when kdump is initialized: This message does not impact the dump functionality and can be ignored. You can configure or disable kdump via /etc/kdump.conf , system-config-kdump , or firstboot . Kernel Media support The following features are presented as Technology Previews: The latest upstream video4linux Digital video broadcasting Primarily infrared remote control device support Various webcam support fixes and improvements Package: kernel-2.6.32-279 Remote audit logging The audit package contains the user space utilities for storing and searching the audit records generated by the audit subsystem in the Linux 2.6 kernel. Within the audispd-plugins sub-package is a utility that allows for the transmission of audit events to a remote aggregating machine. This remote audit logging application, audisp-remote , is considered a Technology Preview in Red Hat Enterprise Linux 6. Package: audispd-plugins-2.2-2 Linux (NameSpace) Container [LXC] Linux containers provide a flexible approach to application runtime containment on bare-metal systems without the need to fully virtualize the workload. Red Hat Enterprise Linux 6 provides application level containers to separate and control the application resource usage policies via cgroups and namespaces. This release includes basic management of container life-cycle by allowing creation, editing and deletion of containers via the libvirt API and the virt-manager GUI. Linux Containers are a Technology Preview. Packages: libvirt-0.9.10-21 , virt-manager-0.9.0-14 Diagnostic pulse for the fence_ipmilan agent, BZ# 655764 A diagnostic pulse can now be issued on the IPMI interface using the fence_ipmilan agent. This new Technology Preview is used to force a kernel dump of a host if the host is configured to do so. Note that this feature is not a substitute for the off operation in a production cluster. Package: fence-agents-3.1.5-17
[ "..no such file or directory" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/kernel_tp
Chapter 11. Installing a cluster on AWS China
Chapter 11. Installing a cluster on AWS China In OpenShift Container Platform version 4.14, you can install a cluster to the following Amazon Web Services (AWS) China regions: cn-north-1 (Beijing) cn-northwest-1 (Ningxia) 11.1. Prerequisites You have an Internet Content Provider (ICP) license. You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. 11.2. Installation requirements Red Hat does not publish a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for the AWS China regions. Before you can install the cluster, you must: Upload a custom RHCOS AMI. Manually create the installation configuration file ( install-config.yaml ). Specify the AWS region, and the accompanying custom AMI, in the installation configuration file. You cannot use the OpenShift Container Platform installation program to create the installation configuration file. The installer does not list an AWS region without native support for an RHCOS AMI. 11.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 11.4. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. 
To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network. Note AWS China does not support a VPN connection between the VPC and your network. For more information about the Amazon VPC service in the Beijing and Ningxia regions, see Amazon Virtual Private Cloud in the AWS China documentation. 11.4.1. Private clusters in AWS To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network. The cluster still requires access to internet to access the AWS APIs. The following items are not required or created when you install a private cluster: Public subnets Public load balancers, which support public ingress A public Route 53 zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. 11.4.1.1. Limitations The ability to add public functionality to a private cluster is limited. You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port). If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers. 11.5. About using a custom VPC In OpenShift Container Platform 4.14, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 11.5.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 
If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com.cn elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com.cn elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. 
Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 11.5.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 11.5.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resource in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 11.5.4. 
Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 11.5.5. AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Applying existing AWS security groups to the cluster". 11.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. 
SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 11.7. Uploading a custom RHCOS AMI in AWS If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region. Prerequisites You configured an AWS account. You created an Amazon S3 bucket with the required IAM service role . You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer . Procedure Export your AWS profile as an environment variable: USD export AWS_PROFILE=<aws_profile> 1 1 The AWS profile name that holds your AWS credentials, like beijingadmin . Export the region to associate with your custom AMI as an environment variable: USD export AWS_DEFAULT_REGION=<aws_region> 1 1 The AWS region, like cn-north-1 . Export the version of RHCOS you uploaded to Amazon S3 as an environment variable: USD export RHCOS_VERSION=<version> 1 1 The RHCOS VMDK version, like 4.14.0 . Export the Amazon S3 bucket name as an environment variable: USD export VMIMPORT_BUCKET_NAME=<s3_bucket_name> Create the containers.json file and define your RHCOS VMDK file: USD cat <<EOF > containers.json { "Description": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64", "Format": "vmdk", "UserBucket": { "S3Bucket": "USD{VMIMPORT_BUCKET_NAME}", "S3Key": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk" } } EOF Import the RHCOS disk as an Amazon EBS snapshot: USD aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} \ --description "<description>" \ 1 --disk-container "file://<file_path>/containers.json" 2 1 The description of your RHCOS disk being imported, like rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64 . 2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key. Check the status of the image import: USD watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION} Example output { "ImportSnapshotTasks": [ { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "ImportTaskId": "import-snap-fh6i8uil", "SnapshotTaskDetail": { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "DiskImageSize": 819056640.0, "Format": "VMDK", "SnapshotId": "snap-06331325870076318", "Status": "completed", "UserBucket": { "S3Bucket": "external-images", "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk" } } } ] } Copy the SnapshotId to register the image. 
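If you prefer not to copy the snapshot ID by hand, you can capture it with the AWS CLI --query option. The following sketch is illustrative only: it assumes that import-snap-fh6i8uil from the example output above is the task you imported, so substitute your own import task ID, and reuse the resulting value as <snapshot_ID> in the next step.
# Assumption: import-snap-fh6i8uil is your import task ID; replace it with your own.
SNAPSHOT_ID=$(aws ec2 describe-import-snapshot-tasks \
  --region ${AWS_DEFAULT_REGION} \
  --import-task-ids import-snap-fh6i8uil \
  --query 'ImportSnapshotTasks[0].SnapshotTaskDetail.SnapshotId' \
  --output text)
# Prints a value such as snap-06331325870076318
echo "${SNAPSHOT_ID}"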
Create a custom RHCOS AMI from the RHCOS snapshot: USD aws ec2 register-image \ --region USD{AWS_DEFAULT_REGION} \ --architecture x86_64 \ 1 --description "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 2 --ena-support \ --name "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 3 --virtualization-type hvm \ --root-device-name '/dev/xvda' \ --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4 1 The RHCOS VMDK architecture type, like x86_64 , aarch64 , s390x , or ppc64le . 2 The Description from the imported snapshot. 3 The name of the RHCOS AMI. 4 The SnapshotID from the imported snapshot. To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs . 11.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 11.9. Manually creating the installation configuration file Installing the cluster requires that you manually generate the installation configuration file. Prerequisites You have uploaded a custom RHCOS AMI. You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. 
Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for AWS 11.9.1. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - cn-north-1a - cn-north-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - cn-north-1a replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: cn-north-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.cn-north-1.vpce.amazonaws.com.cn hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23 pullSecret: '{"auths": ...}' 24 1 12 14 17 24 Required. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . 
If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 18 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 19 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 20 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 21 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 22 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 23 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 11.9.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 11.1. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 11.9.3. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 11.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 11.9.4. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 11.2. Machine types based on 64-bit ARM architecture c6g.* m6g.* r8g.* 11.9.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. 
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . 
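As an optional check that is not part of the documented procedure, you can review the Proxy object that the installation program creates after the cluster is up. This sketch assumes the oc CLI is installed and your kubeconfig is exported.
# Show the full Proxy configuration, including the computed status.noProxy list
oc get proxy cluster -o yaml
# Or print only the proxy URL and the effective noProxy values
oc get proxy cluster -o jsonpath='{.spec.httpProxy}{"\n"}{.status.noProxy}{"\n"}'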
Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 11.9.6. Applying existing AWS security groups to the cluster Applying existing AWS security groups to your control plane and compute machines can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. Prerequisites You have created the security groups in AWS. For more information, see the AWS documentation about working with security groups . The security groups must be associated with the existing VPC that you are deploying the cluster to. The security groups cannot be associated with another VPC. You have an existing install-config.yaml file. Procedure In the install-config.yaml file, edit the compute.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your compute machines. Edit the controlPlane.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your control plane machines. Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies custom security groups # ... compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3 1 Specify the name of the security group as it appears in the Amazon EC2 console, including the sg prefix. 2 Specify subnets for each availability zone that your cluster uses. 11.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 11.11. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 11.11.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... 
spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 11.11.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 11.11.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 11.3. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 11.4. 
Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 11.11.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 11.11.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. 
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 11.11.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". 
Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 11.11.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 11.12. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. 
Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 11.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 11.14. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. 11.15. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. See About remote health monitoring for more information about the Telemetry service. 11.16. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials .
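Before working through the full validation procedure, a quick sanity check from the CLI can confirm that the cluster is healthy. These commands are a minimal sketch and assume the kubeconfig exported earlier in this chapter.
# Every node should report a Ready status
oc get nodes
# Every cluster Operator should report Available=True, Progressing=False, Degraded=False
oc get clusteroperators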
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export AWS_PROFILE=<aws_profile> 1", "export AWS_DEFAULT_REGION=<aws_region> 1", "export RHCOS_VERSION=<version> 1", "export VMIMPORT_BUCKET_NAME=<s3_bucket_name>", "cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF", "aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2", "watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}", "{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }", "aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - cn-north-1a - cn-north-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - cn-north-1a replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: cn-north-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.cn-north-1.vpce.amazonaws.com.cn hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 
22 publish: Internal 23 pullSecret: '{\"auths\": ...}' 24", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 
--install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_aws/installing-aws-china-region
Chapter 9. Satellite host management and monitoring in the web console
Chapter 9. Satellite host management and monitoring in the web console After enabling RHEL web console integration on a Red Hat Satellite Server, you can manage many hosts at scale in the web console. Red Hat Satellite is a system management solution for deploying, configuring, and maintaining your systems across physical, virtual, and cloud environments. Satellite provides provisioning, remote management, and monitoring of multiple Red Hat Enterprise Linux deployments with a centralized tool. By default, RHEL web console integration is disabled in Red Hat Satellite. To access RHEL web console features for your hosts from within Red Hat Satellite, you must first enable RHEL web console integration on a Red Hat Satellite Server. To enable the RHEL web console on your Satellite Server, enter the following command as root : Additional resources Host management and monitoring by using the RHEL web console in the Managing hosts in Red Hat Satellite guide
[ "satellite-installer --enable-foreman-plugin-remote-execution-cockpit --reset-foreman-plugin-remote-execution-cockpit-ensure" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_systems_using_the_rhel_8_web_console/ref_satellite-host-management-and-monitoring_system-management-using-the-rhel-8-web-console
Appendix C. Understanding the gluster_network_inventory.yml file
Appendix C. Understanding the gluster_network_inventory.yml file C.1. Configuration parameters for creation of gluster network vars he_fqdn FQDN of the hosted engine VM. he_admin_password Password for the RHV Manager Administration Portal. datacenter_name RHV datacenter name. Usually, a Red Hat Hyperconverged Infrastructure for Virtualization deployment adds all 3 hosts to the Default cluster in the Default datacenter. cluster_name RHV cluster name. boot_protocol Whether to use DHCP or static networking. version (optional) Whether to use IPv4 or IPv6 networking. v4 is the default, and is assumed if this parameter is omitted. The other valid value is v6. Mixed networks are not supported. mtu_value (optional) Specifies the Maximum Transmission Unit for the network, the largest packet or frame size that can be sent in a single transaction. The default value is 1500 . Increasing this to 9000 on networks that support Jumbo frames greatly improves throughput. cluster_nodes host Host's public network FQDN, which is mentioned in the Red Hat Virtualization Administration Portal. interface Network interface or bond corresponding to the storage or backend network. C.2. Example gluster_network_inventory.yml
[ "all: hosts: localhost: vars: he_fqdn: rhv-manager.example.com he_admin_password: xxxxxxxxxx datacenter_name: Default cluster_name: Default boot_protocol: dhcp version: v4 mtu_value: 9000 # For dhcp boot_protocol cluster_nodes: - {host: host1-frontend.example.com , interface: eth1 } - {host: host2-frontend.example.com , interface: eth1 } - {host: host3-frontend.example.com , interface: eth1 }" ]
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/understanding-the-gluster_network_inventory-yml-file
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Make sure you are logged in to the Jira website. Provide feedback by clicking on this link . Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. If you want to be notified about future updates, please make sure you are assigned as Reporter . Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/automating_sap_hana_scale-up_system_replication_using_the_rhel_ha_add-on/feedback_automating-sap-hana-scale-up-system-replication
Chapter 2. Example deployment: High availability cluster with Compute and Ceph
Chapter 2. Example deployment: High availability cluster with Compute and Ceph This example scenario shows the architecture, hardware and network specifications, and the undercloud and overcloud configuration files for a high availability deployment with the OpenStack Compute service and Red Hat Ceph Storage. Important This deployment is intended to use as a reference for test environments and is not supported for production environments. Figure 2.1. Example high availability deployment architecture 2.1. Example high availability hardware specifications The example HA deployment uses a specific hardware configuration. You can adjust the CPU, memory, storage, or NICs as needed in your own test deployment. Table 2.1. Physical computers Number of Computers Purpose CPUs Memory Disk Space Power Management NICs 1 undercloud node 4 24 GB 40 GB IPMI 2 (1 external; 1 on provisioning) + 1 IPMI 3 Controller nodes 4 24 GB 40 GB IPMI 3 (2 bonded on overcloud; 1 on provisioning) + 1 IPMI 3 Ceph Storage nodes 4 24 GB 40 GB IPMI 3 (2 bonded on overcloud; 1 on provisioning) + 1 IPMI 2 Compute nodes (add more as needed) 4 24 GB 40 GB IPMI 3 (2 bonded on overcloud; 1 on provisioning) + 1 IPMI 2.2. Example high availability network specifications The example HA deployment uses a specific virtual and physical network configuration. You can adjust the configuration as needed in your own test deployment. Note This example does not include hardware redundancy for the control plane and the provisioning network where the overcloud keystone admin endpoint is configured. For information about planning your high availability networking, see Section 1.3, "Planning high availability networking" . Table 2.2. Physical and virtual networks Physical NICs Purpose VLANs Description eth0 Provisioning network (undercloud) N/A Manages all nodes from director (undercloud) eth1 and eth2 Controller/External (overcloud) N/A Bonded NICs with VLANs External network VLAN 100 Allows access from outside the environment to the project networks, internal API, and OpenStack Horizon Dashboard Internal API VLAN 201 Provides access to the internal API between Compute nodes and Controller nodes Storage access VLAN 202 Connects Compute nodes to storage media Storage management VLAN 203 Manages storage media Project network VLAN 204 Provides project network services to RHOSP 2.3. Example high availability undercloud configuration files The example HA deployment uses the undercloud configuration files instackenv.json , undercloud.conf , and network-environment.yaml . instackenv.json undercloud.conf network-environment.yaml 2.4. Example high availability overcloud configuration files The example HA deployment uses the overcloud configuration files haproxy.cfg , corosync.cfg , and ceph.cfg . /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg (Controller nodes) This file identifies the services that HAProxy manages. It contains the settings for the services that HAProxy monitors. This file is identical on all Controller nodes. /etc/corosync/corosync.conf file (Controller nodes) This file defines the cluster infrastructure, and is available on all Controller nodes. /etc/ceph/ceph.conf (Ceph nodes) This file contains Ceph high availability settings, including the hostnames and IP addresses of the monitoring hosts. 2.5. Additional resources Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director Chapter 1, Red Hat OpenStack Platform high availability overview and planning
[ "{ \"nodes\": [ { \"pm_password\": \"testpass\", \"memory\": \"24\", \"pm_addr\": \"10.100.0.11\", \"mac\": [ \"2c:c2:60:3b:b3:94\" ], \"pm_type\": \"ipmi\", \"disk\": \"40\", \"arch\": \"x86_64\", \"cpu\": \"1\", \"pm_user\": \"admin\" }, { \"pm_password\": \"testpass\", \"memory\": \"24\", \"pm_addr\": \"10.100.0.12\", \"mac\": [ \"2c:c2:60:51:b7:fb\" ], \"pm_type\": \"ipmi\", \"disk\": \"40\", \"arch\": \"x86_64\", \"cpu\": \"1\", \"pm_user\": \"admin\" }, { \"pm_password\": \"testpass\", \"memory\": \"24\", \"pm_addr\": \"10.100.0.13\", \"mac\": [ \"2c:c2:60:76:ce:a5\" ], \"pm_type\": \"ipmi\", \"disk\": \"40\", \"arch\": \"x86_64\", \"cpu\": \"1\", \"pm_user\": \"admin\" }, { \"pm_password\": \"testpass\", \"memory\": \"24\", \"pm_addr\": \"10.100.0.51\", \"mac\": [ \"2c:c2:60:08:b1:e2\" ], \"pm_type\": \"ipmi\", \"disk\": \"40\", \"arch\": \"x86_64\", \"cpu\": \"1\", \"pm_user\": \"admin\" }, { \"pm_password\": \"testpass\", \"memory\": \"24\", \"pm_addr\": \"10.100.0.52\", \"mac\": [ \"2c:c2:60:20:a1:9e\" ], \"pm_type\": \"ipmi\", \"disk\": \"40\", \"arch\": \"x86_64\", \"cpu\": \"1\", \"pm_user\": \"admin\" }, { \"pm_password\": \"testpass\", \"memory\": \"24\", \"pm_addr\": \"10.100.0.53\", \"mac\": [ \"2c:c2:60:58:10:33\" ], \"pm_type\": \"ipmi\", \"disk\": \"40\", \"arch\": \"x86_64\", \"cpu\": \"1\", \"pm_user\": \"admin\" }, { \"pm_password\": \"testpass\", \"memory\": \"24\", \"pm_addr\": \"10.100.0.101\", \"mac\": [ \"2c:c2:60:31:a9:55\" ], \"pm_type\": \"ipmi\", \"disk\": \"40\", \"arch\": \"x86_64\", \"cpu\": \"2\", \"pm_user\": \"admin\" }, { \"pm_password\": \"testpass\", \"memory\": \"24\", \"pm_addr\": \"10.100.0.102\", \"mac\": [ \"2c:c2:60:0d:e7:d1\" ], \"pm_type\": \"ipmi\", \"disk\": \"40\", \"arch\": \"x86_64\", \"cpu\": \"2\", \"pm_user\": \"admin\" } ], \"overcloud\": {\"password\": \"7adbbbeedc5b7a07ba1917e1b3b228334f9a2d4e\", \"endpoint\": \"http://192.168.1.150:5000/v2.0/\" } }", "[DEFAULT] image_path = /home/stack/images local_ip = 10.200.0.1/24 undercloud_public_vip = 10.200.0.2 undercloud_admin_vip = 10.200.0.3 undercloud_service_certificate = /etc/pki/instack-certs/undercloud.pem local_interface = eth0 masquerade_network = 10.200.0.0/24 dhcp_start = 10.200.0.5 dhcp_end = 10.200.0.24 network_cidr = 10.200.0.0/24 network_gateway = 10.200.0.1 #discovery_interface = br-ctlplane discovery_iprange = 10.200.0.150,10.200.0.200 discovery_runbench = 1 undercloud_admin_password = testpass", "resource_registry: OS::TripleO::BlockStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/cinder-storage.yaml OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml OS::TripleO::ObjectStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/swift-storage.yaml OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/ceph-storage.yaml parameter_defaults: InternalApiNetCidr: 172.16.0.0/24 TenantNetCidr: 172.17.0.0/24 StorageNetCidr: 172.18.0.0/24 StorageMgmtNetCidr: 172.19.0.0/24 ExternalNetCidr: 192.168.1.0/24 InternalApiAllocationPools: [{ start : 172.16.0.10 , end : 172.16.0.200 }] TenantAllocationPools: [{ start : 172.17.0.10 , end : 172.17.0.200 }] StorageAllocationPools: [{ start : 172.18.0.10 , end : 172.18.0.200 }] StorageMgmtAllocationPools: [{ start : 172.19.0.10 , end : 172.19.0.200 }] # Leave room for floating IPs in the External allocation pool ExternalAllocationPools: [{ start : 192.168.1.150 , 
end : 192.168.1.199 }] InternalApiNetworkVlanID: 201 StorageNetworkVlanID: 202 StorageMgmtNetworkVlanID: 203 TenantNetworkVlanID: 204 ExternalNetworkVlanID: 100 # Set to the router gateway on the external network ExternalInterfaceDefaultRoute: 192.168.1.1 # Set to \"br-ex\" if using floating IPs on native VLAN on bridge br-ex NeutronExternalNetworkBridge: \"''\" # Customize bonding options if required BondInterfaceOvsOptions: \"bond_mode=active-backup lacp=off other_config:bond-miimon-interval=100\"", "This file is managed by Puppet global daemon group haproxy log /dev/log local0 maxconn 20480 pidfile /var/run/haproxy.pid ssl-default-bind-ciphers !SSLv2:kEECDH:kRSA:kEDH:kPSK:+3DES:!aNULL:!eNULL:!MD5:!EXP:!RC4:!SEED:!IDEA:!DES ssl-default-bind-options no-sslv3 stats socket /var/lib/haproxy/stats mode 600 level user stats timeout 2m user haproxy defaults log global maxconn 4096 mode tcp retries 3 timeout http-request 10s timeout queue 2m timeout connect 10s timeout client 2m timeout server 2m timeout check 10s listen aodh bind 192.168.1.150:8042 transparent bind 172.16.0.10:8042 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk server overcloud-controller-0.internalapi.localdomain 172.16.0.13:8042 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:8042 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:8042 check fall 5 inter 2000 rise 2 listen cinder bind 192.168.1.150:8776 transparent bind 172.16.0.10:8776 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk server overcloud-controller-0.internalapi.localdomain 172.16.0.13:8776 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:8776 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:8776 check fall 5 inter 2000 rise 2 listen glance_api bind 192.168.1.150:9292 transparent bind 172.18.0.10:9292 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk GET /healthcheck server overcloud-controller-0.internalapi.localdomain 172.18.0.17:9292 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.18.0.15:9292 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.18.0.16:9292 check fall 5 inter 2000 rise 2 listen gnocchi bind 192.168.1.150:8041 transparent bind 172.16.0.10:8041 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk server overcloud-controller-0.internalapi.localdomain 172.16.0.13:8041 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:8041 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:8041 check fall 5 inter 2000 rise 2 listen haproxy.stats bind 10.200.0.6:1993 transparent mode http stats enable stats uri / stats auth admin:PnDD32EzdVCf73CpjHhFGHZdV listen heat_api bind 192.168.1.150:8004 transparent bind 172.16.0.10:8004 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } 
option httpchk timeout client 10m timeout server 10m server overcloud-controller-0.internalapi.localdomain 172.16.0.13:8004 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:8004 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:8004 check fall 5 inter 2000 rise 2 listen heat_cfn bind 192.168.1.150:8000 transparent bind 172.16.0.10:8000 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk timeout client 10m timeout server 10m server overcloud-controller-0.internalapi.localdomain 172.16.0.13:8000 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:8000 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:8000 check fall 5 inter 2000 rise 2 listen horizon bind 192.168.1.150:80 transparent bind 172.16.0.10:80 transparent mode http cookie SERVERID insert indirect nocache option forwardfor option httpchk server overcloud-controller-0.internalapi.localdomain 172.16.0.13:80 check cookie overcloud-controller-0 fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:80 check cookie overcloud-controller-0 fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:80 check cookie overcloud-controller-0 fall 5 inter 2000 rise 2 listen keystone_admin bind 192.168.24.15:35357 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk GET /v3 server overcloud-controller-0.ctlplane.localdomain 192.168.24.9:35357 check fall 5 inter 2000 rise 2 server overcloud-controller-1.ctlplane.localdomain 192.168.24.8:35357 check fall 5 inter 2000 rise 2 server overcloud-controller-2.ctlplane.localdomain 192.168.24.18:35357 check fall 5 inter 2000 rise 2 listen keystone_public bind 192.168.1.150:5000 transparent bind 172.16.0.10:5000 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk GET /v3 server overcloud-controller-0.internalapi.localdomain 172.16.0.13:5000 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:5000 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:5000 check fall 5 inter 2000 rise 2 listen mysql bind 172.16.0.10:3306 transparent option tcpka option httpchk stick on dst stick-table type ip size 1000 timeout client 90m timeout server 90m server overcloud-controller-0.internalapi.localdomain 172.16.0.13:3306 backup check inter 1s on-marked-down shutdown-sessions port 9200 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:3306 backup check inter 1s on-marked-down shutdown-sessions port 9200 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:3306 backup check inter 1s on-marked-down shutdown-sessions port 9200 listen neutron bind 192.168.1.150:9696 transparent bind 172.16.0.10:9696 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk server overcloud-controller-0.internalapi.localdomain 172.16.0.13:9696 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:9696 check fall 5 
inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:9696 check fall 5 inter 2000 rise 2 listen nova_metadata bind 172.16.0.10:8775 transparent option httpchk server overcloud-controller-0.internalapi.localdomain 172.16.0.13:8775 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:8775 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:8775 check fall 5 inter 2000 rise 2 listen nova_novncproxy bind 192.168.1.150:6080 transparent bind 172.16.0.10:6080 transparent balance source http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option tcpka timeout tunnel 1h server overcloud-controller-0.internalapi.localdomain 172.16.0.13:6080 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:6080 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:6080 check fall 5 inter 2000 rise 2 listen nova_osapi bind 192.168.1.150:8774 transparent bind 172.16.0.10:8774 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk server overcloud-controller-0.internalapi.localdomain 172.16.0.13:8774 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:8774 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:8774 check fall 5 inter 2000 rise 2 listen nova_placement bind 192.168.1.150:8778 transparent bind 172.16.0.10:8778 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk server overcloud-controller-0.internalapi.localdomain 172.16.0.13:8778 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:8778 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:8778 check fall 5 inter 2000 rise 2 listen panko bind 192.168.1.150:8977 transparent bind 172.16.0.10:8977 transparent http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk server overcloud-controller-0.internalapi.localdomain 172.16.0.13:8977 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:8977 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:8977 check fall 5 inter 2000 rise 2 listen redis bind 172.16.0.13:6379 transparent balance first option tcp-check tcp-check send AUTH\\ V2EgUh2pvkr8VzU6yuE4XHsr9\\r\\n tcp-check send PING\\r\\n tcp-check expect string +PONG tcp-check send info\\ replication\\r\\n tcp-check expect string role:master tcp-check send QUIT\\r\\n tcp-check expect string +OK server overcloud-controller-0.internalapi.localdomain 172.16.0.13:6379 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:6379 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:6379 check fall 5 inter 2000 rise 2 listen swift_proxy_server bind 192.168.1.150:8080 transparent bind 172.18.0.10:8080 transparent option httpchk GET /healthcheck timeout client 2m timeout server 2m server overcloud-controller-0.storage.localdomain 172.18.0.17:8080 check fall 5 
inter 2000 rise 2 server overcloud-controller-1.storage.localdomain 172.18.0.15:8080 check fall 5 inter 2000 rise 2 server overcloud-controller-2.storage.localdomain 172.18.0.16:8080 check fall 5 inter 2000 rise 2", "totem { version: 2 cluster_name: tripleo_cluster transport: udpu token: 10000 } nodelist { node { ring0_addr: overcloud-controller-0 nodeid: 1 } node { ring0_addr: overcloud-controller-1 nodeid: 2 } node { ring0_addr: overcloud-controller-2 nodeid: 3 } } quorum { provider: corosync_votequorum } logging { to_logfile: yes logfile: /var/log/cluster/corosync.log to_syslog: yes }", "[global] osd_pool_default_pgp_num = 128 auth_service_required = cephx mon_initial_members = overcloud-controller-0 , overcloud-controller-1 , overcloud-controller-2 fsid = 8c835acc-6838-11e5-bb96-2cc260178a92 cluster_network = 172.19.0.11/24 auth_supported = cephx auth_cluster_required = cephx mon_host = 172.18.0.17,172.18.0.15,172.18.0.16 auth_client_required = cephx osd_pool_default_size = 3 osd_pool_default_pg_num = 128 public_network = 172.18.0.17/24" ]
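These example files are consumed by the services themselves; to confirm that the resulting cluster is healthy, the usual approach is to run the Pacemaker command-line tools on one of the Controller nodes (a generic sketch, not part of the example configuration files above):
# Show overall Pacemaker/Corosync cluster health and resource state:
sudo pcs status
# Confirm that the three Controller nodes listed in corosync.conf are online:
sudo pcs status corosync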
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/managing_high_availability_services/assembly_example-ha-deployment_rhosp
function::thread_indent
function::thread_indent Name function::thread_indent - returns an amount of space with the current task information Synopsis Arguments delta the amount of space added/removed for each call Description This function returns a string with appropriate indentation for a thread. Call it with a small positive or matching negative delta. If this is the real outermost, initial level of indentation, then the function resets the relative timestamp base to zero. The timestamp is as per provided by the __indent_timestamp function, which by default measures microseconds.
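A minimal usage sketch (an illustrative one-liner run through the stap command; the probed vfs_read kernel function is only an example and is not part of this reference entry):
# Indent on function entry, outdent on return, one level per nested call:
stap -e 'probe kernel.function("vfs_read") { printf("%s -> vfs_read\n", thread_indent(1)) }
         probe kernel.function("vfs_read").return { printf("%s <- vfs_read\n", thread_indent(-1)) }'
Calling thread_indent(1) on entry and thread_indent(-1) on return keeps the per-thread indentation balanced, and the returned string combines the indentation with the current task information and the relative timestamp described above.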
[ "thread_indent:string(delta:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-thread-indent
Chapter 1. Red Hat Software Collections 3.7
Chapter 1. Red Hat Software Collections 3.7 This chapter serves as an overview of the Red Hat Software Collections 3.7 content set. It provides a list of components and their descriptions, sums up changes in this version, documents relevant compatibility information, and lists known issues. 1.1. About Red Hat Software Collections For certain applications, more recent versions of some software components are often needed in order to use their latest new features. Red Hat Software Collections is a Red Hat offering that provides a set of dynamic programming languages, database servers, and various related packages that are either more recent than their equivalent versions included in the base Red Hat Enterprise Linux system, or are available for this system for the first time. Red Hat Software Collections 3.7 is available for Red Hat Enterprise Linux 7. For a complete list of components that are distributed as part of Red Hat Software Collections and a brief summary of their features, see Section 1.2, "Main Features" . Red Hat Software Collections does not replace the default system tools provided with Red Hat Enterprise Linux 7. Instead, a parallel set of tools is installed in the /opt/ directory and can be optionally enabled per application by the user using the supplied scl utility. The default versions of Perl or PostgreSQL, for example, remain those provided by the base Red Hat Enterprise Linux system. Note In Red Hat Enterprise Linux 8, similar components are provided as Application Streams . All Red Hat Software Collections components are fully supported under Red Hat Enterprise Linux Subscription Level Agreements, are functionally complete, and are intended for production use. Important bug fix and security errata are issued to Red Hat Software Collections subscribers in a similar manner to Red Hat Enterprise Linux for at least two years from the release of each major version. In each major release stream, each version of a selected component remains backward compatible. For detailed information about length of support for individual components, refer to the Red Hat Software Collections Product Life Cycle document. 1.1.1. Red Hat Developer Toolset Red Hat Developer Toolset is a part of Red Hat Software Collections, included as a separate Software Collection. For more information about Red Hat Developer Toolset, refer to the Red Hat Developer Toolset Release Notes and the Red Hat Developer Toolset User Guide . 1.2. Main Features Table 1.1, "Red Hat Software Collections Components" lists components that are supported at the time of the Red Hat Software Collections 3.7 release. All Software Collections are currently supported only on Red Hat Enterprise Linux 7. Table 1.1. Red Hat Software Collections Components Component Software Collection Description Red Hat Developer Toolset 10.1 devtoolset-10 Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides current versions of the GNU Compiler Collection , GNU Debugger , and other development, debugging, and performance monitoring tools. For a complete list of components, see the Red Hat Developer Toolset Components table in the Red Hat Developer Toolset User Guide . Perl 5.30.1 rh-perl530 A release of Perl, a high-level programming language that is commonly used for system administration utilities and web programming. The rh-perl530 Software Collection provides additional utilities, scripts, and database connectors for MySQL , PostgreSQL , and SQLite . 
It includes the DateTime Perl module and the mod_perl Apache httpd module, which is supported only with the httpd24 Software Collection. Additionally, it provides the cpanm utility for easy installation of CPAN modules, the LWP::UserAgent module for communicating with the HTTP servers, and the LWP::Protocol::https module for securing the communication. The rh-perl530 packaging is aligned with upstream; the perl530-perl package installs also core modules, while the interpreter is provided by the perl-interpreter package. PHP 7.3.20 rh-php73 A release of PHP 7.3 with PEAR 1.10.9, APCu 5.1.17, and the Xdebug extension. Python 2.7.18 python27 A release of Python 2.7 with a number of additional utilities. This Python version provides various features and enhancements, including an ordered dictionary type, faster I/O operations, and improved forward compatibility with Python 3. The python27 Software Collections contains the Python 2.7.13 interpreter , a set of extension libraries useful for programming web applications and mod_wsgi (only supported with the httpd24 Software Collection), MySQL and PostgreSQL database connectors, and numpy and scipy . Python 3.8.6 rh-python38 The rh-python38 Software Collection contains Python 3.8, which introduces new Python modules, such as contextvars , dataclasses , or importlib.resources , new language features, improved developer experience, and performance improvements . In addition, a set of popular extension libraries is provided, including mod_wsgi (supported only together with the httpd24 Software Collection), numpy , scipy , and the psycopg2 PostgreSQL database connector. Ruby 2.6.7 rh-ruby26 A release of Ruby 2.6. This version provides multiple performance improvements and new features, such as endless ranges, the Binding#source_location method, and the USDSAFE process global state . Ruby 2.6.0 maintains source-level backward compatibility with Ruby 2.5. Ruby 2.7.3 rh-ruby27 A release of Ruby 2.7. This version provides multiple performance improvements and new features, such as Compaction GC or command-line interface for the LALR(1) parser generator, and an enhancement to REPL. Ruby 2.7 maintains source-level backward compatibility with Ruby 2.6. Ruby 3.0.1 rh-ruby30 A release of Ruby 3.0. This version provides multiple performance improvements and new features, such as Ractor , Fiber Scheduler and the RBS language . Ruby 3.0 maintains source-level backward compatibility with Ruby 2.7. MariaDB 10.3.27 rh-mariadb103 A release of MariaDB, an alternative to MySQL for users of Red Hat Enterprise Linux. For all practical purposes, MySQL is binary compatible with MariaDB and can be replaced with it without any data conversions. This version introduces system-versioned tables, invisible columns, a new instant ADD COLUMN operation for InnoDB , and a JDBC connector for MariaDB and MySQL . MariaDB 10.5.9 rh-mariadb105 A release of MariaDB, an alternative to MySQL for users of Red Hat Enterprise Linux. For all practical purposes, MySQL is binary compatible with MariaDB and can be replaced with it without any data conversions. This version includes various new features, MariaDB Galera Cluster upgraded to version 4, and PAM plug-in version 2.0 . MySQL 8.0.21 rh-mysql80 A release of the MySQL server, which introduces a number of new security and account management features and enhancements. 
PostgreSQL 10.15 rh-postgresql10 A release of PostgreSQL, which includes a significant performance improvement and a number of new features, such as logical replication using the publish and subscribe keywords, or stronger password authentication based on the SCRAM-SHA-256 mechanism . PostgreSQL 12.5 rh-postgresql12 A release of PostgreSQL, which provides the pgaudit extension, various enhancements to partitioning and parallelism, support for the SQL/JSON path language, and performance improvements. PostgreSQL 13.2 rh-postgresql13 A release of PostgreSQL, which enables improved query planning and introduces various performance improvements and two new packages, pg_repack and plpython3 . Node.js 12.21.0 rh-nodejs12 A release of Node.js with V8 engine version 7.6, support for ES6 modules, and improved support for native modules. Node.js 14.16.0 rh-nodejs14 A release of Node.js with V8 version 8.3, a new experimental WebAssembly System Interface (WASI), and a new experimental Async Local Storage API. nginx 1.16.1 rh-nginx116 A release of nginx, a web and proxy server with a focus on high concurrency, performance, and low memory usage. This version introduces numerous updates related to SSL, several new directives and parameters, and various enhancements. nginx 1.18.0 rh-nginx118 A release of nginx, a web and proxy server with a focus on high concurrency, performance, and low memory usage. This version introduces enhancements to HTTP request rate and connection limiting, and a new auth_delay directive . In addition, support for new variables has been added to multiple directives. Apache httpd 2.4.34 httpd24 A release of the Apache HTTP Server (httpd), including a high performance event-based processing model, enhanced SSL module and FastCGI support . The mod_auth_kerb , mod_auth_mellon , and ModSecurity modules are also included. Varnish Cache 6.0.6 rh-varnish6 A release of Varnish Cache, a high-performance HTTP reverse proxy. This version includes support for Unix Domain Sockets (both for clients and for back-end servers), new level of the VCL language ( vcl 4.1 ), and improved HTTP/2 support . Maven 3.6.1 rh-maven36 A release of Maven, a software project management and comprehension tool. This release provides various enhancements and bug fixes. Git 2.27.0 rh-git227 A release of Git, a distributed revision control system with a decentralized architecture. As opposed to centralized version control systems with a client-server model, Git ensures that each working copy of a Git repository is its exact copy with complete revision history. This version introduces numerous enhancements; for example, the git checkout command split into git switch and git restore , and changed behavior of the git rebase command . In addition, Git Large File Storage (LFS) has been updated to version 2.11.0. Redis 5.0.5 rh-redis5 A release of Redis 5.0, a persistent key-value database . Redis now provides redis-trib , a cluster management tool . HAProxy 1.8.24 rh-haproxy18 A release of HAProxy 1.8, a reliable, high-performance network load balancer for TCP and HTTP-based applications. JDK Mission Control 8.0.0 rh-jmc This Software Collection includes JDK Mission Control (JMC) , a powerful profiler for HotSpot JVMs. JMC provides an advanced set of tools for efficient and detailed analysis of extensive data collected by the JDK Flight Recorder. JMC requires JDK version 11 or later to run. Target Java applications must run with at least OpenJDK version 8 so that JMC can access JDK Flight Recorder features. 
The rh-jmc Software Collection requires the rh-maven36 Software Collection. Previously released Software Collections remain available in the same distribution channels. All Software Collections, including retired components, are listed in the Table 1.2, "All Available Software Collections" . Software Collections that are no longer supported are marked with an asterisk ( * ). See the Red Hat Software Collections Product Life Cycle document for information on the length of support for individual components. For detailed information regarding previously released components, refer to the Release Notes for earlier versions of Red Hat Software Collections. Table 1.2. All Available Software Collections Component Software Collection Availability Architectures supported on RHEL7 Components New in Red Hat Software Collections 3.7 MariaDB 10.5.9 rh-mariadb105 RHEL7 x86_64, s390x, ppc64le PostgreSQL 13.2 rh-postgresql13 RHEL7 x86_64, s390x, ppc64le Ruby 3.0.1 rh-ruby30 RHEL7 x86_64, s390x, ppc64le Table 1.2. All Available Software Collections Components Updated in Red Hat Software Collections 3.7 Red Hat Developer Toolset 10.1 devtoolset-10 RHEL7 x86_64, s390x, ppc64, ppc64le JDK Mission Control 8.0.0 rh-jmc RHEL7 x86_64 Ruby 2.7.3 rh-ruby27 RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.6.7 rh-ruby26 RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.6 Git 2.27.0 rh-git227 RHEL7 x86_64, s390x, ppc64le nginx 1.18.0 rh-nginx118 RHEL7 x86_64, s390x, ppc64le Node.js 14.16.0 rh-nodejs14 RHEL7 x86_64, s390x, ppc64le Apache httpd 2.4.34 httpd24 RHEL7 x86_64, s390x, aarch64, ppc64le PHP 7.3.20 rh-php73 RHEL7 x86_64, s390x, aarch64, ppc64le HAProxy 1.8.24 rh-haproxy18 RHEL7 x86_64 Perl 5.30.1 rh-perl530 RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.5.9 rh-ruby25 * RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.5 Red Hat Developer Toolset 9.1 devtoolset-9 RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Python 3.8.6 rh-python38 RHEL7 x86_64, s390x, aarch64, ppc64le Varnish Cache 6.0.6 rh-varnish6 RHEL7 x86_64, s390x, aarch64, ppc64le Apache httpd 2.4.34 (the last update for RHEL6) httpd24 (RHEL6)* RHEL6 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.4 Node.js 12.21.0 rh-nodejs12 RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.16.1 rh-nginx116 RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 12.5 rh-postgresql12 RHEL7 x86_64, s390x, aarch64, ppc64le Maven 3.6.1 rh-maven36 RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.3 Red Hat Developer Toolset 8.1 devtoolset-8 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le MariaDB 10.3.27 rh-mariadb103 RHEL7 x86_64, s390x, aarch64, ppc64le Redis 5.0.5 rh-redis5 RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.2 PHP 7.2.24 rh-php72 * RHEL7 x86_64, s390x, aarch64, ppc64le MySQL 8.0.21 rh-mysql80 RHEL7 x86_64, s390x, aarch64, ppc64le Node.js 10.21.0 rh-nodejs10 * RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.14.1 rh-nginx114 * RHEL7 x86_64, s390x, aarch64, ppc64le Git 2.18.4 rh-git218 * RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. 
All Available Software Collections Components Last Updated in Red Hat Software Collections 3.1 Red Hat Developer Toolset 7.1 devtoolset-7 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Perl 5.26.3 rh-perl526 * RHEL7 x86_64, s390x, aarch64, ppc64le MongoDB 3.6.3 rh-mongodb36 * RHEL7 x86_64, s390x, aarch64, ppc64le Varnish Cache 5.2.1 rh-varnish5 * RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 10.15 rh-postgresql10 RHEL7 x86_64, s390x, aarch64, ppc64le PHP 7.0.27 rh-php70 * RHEL6, RHEL7 x86_64 MySQL 5.7.24 rh-mysql57 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.0 PHP 7.1.8 rh-php71 * RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.12.1 rh-nginx112 * RHEL7 x86_64, s390x, aarch64, ppc64le Python 3.6.12 rh-python36 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Maven 3.5.0 rh-maven35 * RHEL7 x86_64, s390x, aarch64, ppc64le MariaDB 10.2.22 rh-mariadb102 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 9.6.19 rh-postgresql96 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le MongoDB 3.4.9 rh-mongodb34 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Node.js 8.11.4 rh-nodejs8 * RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.4 Red Hat Developer Toolset 6.1 devtoolset-6 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Scala 2.10.6 rh-scala210 * RHEL7 x86_64 nginx 1.10.2 rh-nginx110 * RHEL6, RHEL7 x86_64 Node.js 6.11.3 rh-nodejs6 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.4.6 rh-ruby24 * RHEL6, RHEL7 x86_64 Ruby on Rails 5.0.1 rh-ror50 * RHEL6, RHEL7 x86_64 Eclipse 4.6.3 rh-eclipse46 * RHEL7 x86_64 Python 2.7.18 python27 RHEL6*, RHEL7 x86_64, s390x, aarch64, ppc64le Thermostat 1.6.6 rh-thermostat16 * RHEL6, RHEL7 x86_64 Maven 3.3.9 rh-maven33 * RHEL6, RHEL7 x86_64 Common Java Packages rh-java-common * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.3 Git 2.9.3 rh-git29 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Redis 3.2.4 rh-redis32 * RHEL6, RHEL7 x86_64 Perl 5.24.0 rh-perl524 * RHEL6, RHEL7 x86_64 Python 3.5.1 rh-python35 * RHEL6, RHEL7 x86_64 MongoDB 3.2.10 rh-mongodb32 * RHEL6, RHEL7 x86_64 Ruby 2.3.8 rh-ruby23 * RHEL6, RHEL7 x86_64 PHP 5.6.25 rh-php56 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.2 Red Hat Developer Toolset 4.1 devtoolset-4 * RHEL6, RHEL7 x86_64 MariaDB 10.1.29 rh-mariadb101 * RHEL6, RHEL7 x86_64 MongoDB 3.0.11 upgrade collection rh-mongodb30upg * RHEL6, RHEL7 x86_64 Node.js 4.6.2 rh-nodejs4 * RHEL6, RHEL7 x86_64 PostgreSQL 9.5.14 rh-postgresql95 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.2.6 rh-ror42 * RHEL6, RHEL7 x86_64 MongoDB 2.6.9 rh-mongodb26 * RHEL6, RHEL7 x86_64 Thermostat 1.4.4 thermostat1 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.1 Varnish Cache 4.0.3 rh-varnish4 * RHEL6, RHEL7 x86_64 nginx 1.8.1 rh-nginx18 * RHEL6, RHEL7 x86_64 Node.js 0.10 nodejs010 * RHEL6, RHEL7 x86_64 Maven 3.0.5 maven30 * RHEL6, RHEL7 x86_64 V8 3.14.5.10 v8314 * RHEL6, RHEL7 x86_64 Table 1.2. 
All Available Software Collections Components Last Updated in Red Hat Software Collections 2.0 Red Hat Developer Toolset 3.1 devtoolset-3 * RHEL6, RHEL7 x86_64 Perl 5.20.1 rh-perl520 * RHEL6, RHEL7 x86_64 Python 3.4.2 rh-python34 * RHEL6, RHEL7 x86_64 Ruby 2.2.9 rh-ruby22 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.1.5 rh-ror41 * RHEL6, RHEL7 x86_64 MariaDB 10.0.33 rh-mariadb100 * RHEL6, RHEL7 x86_64 MySQL 5.6.40 rh-mysql56 * RHEL6, RHEL7 x86_64 PostgreSQL 9.4.14 rh-postgresql94 * RHEL6, RHEL7 x86_64 Passenger 4.0.50 rh-passenger40 * RHEL6, RHEL7 x86_64 PHP 5.4.40 php54 * RHEL6, RHEL7 x86_64 PHP 5.5.21 php55 * RHEL6, RHEL7 x86_64 nginx 1.6.2 nginx16 * RHEL6, RHEL7 x86_64 DevAssistant 0.9.3 devassist09 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 1 Git 1.9.4 git19 * RHEL6, RHEL7 x86_64 Perl 5.16.3 perl516 * RHEL6, RHEL7 x86_64 Python 3.3.2 python33 * RHEL6, RHEL7 x86_64 Ruby 1.9.3 ruby193 * RHEL6, RHEL7 x86_64 Ruby 2.0.0 ruby200 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.0.2 ror40 * RHEL6, RHEL7 x86_64 MariaDB 5.5.53 mariadb55 * RHEL6, RHEL7 x86_64 MongoDB 2.4.9 mongodb24 * RHEL6, RHEL7 x86_64 MySQL 5.5.52 mysql55 * RHEL6, RHEL7 x86_64 PostgreSQL 9.2.18 postgresql92 * RHEL6, RHEL7 x86_64 Legend: RHEL6 - Red Hat Enterprise Linux 6 RHEL7 - Red Hat Enterprise Linux 7 x86_64 - AMD and Intel 64-bit architectures s390x - The 64-bit IBM Z architecture aarch64 - The 64-bit ARM architecture ppc64 - IBM POWER, big endian ppc64le - IBM POWER, little endian * - Retired component; this Software Collection is no longer supported The tables above list the latest versions available through asynchronous updates. Note that Software Collections released in Red Hat Software Collections 2.0 and later include a rh- prefix in their names. Eclipse is available as a part of the Red Hat Developer Tools offering. 1.3. Changes in Red Hat Software Collections 3.7 1.3.1. Overview Architectures The Red Hat Software Collections offering contains packages for Red Hat Enterprise Linux 7 running on the following architectures: AMD and Intel 64-bit architectures 64-bit IBM Z IBM POWER, little endian For a full list of components and their availability, see Table 1.2, "All Available Software Collections" . New Software Collections Red Hat Software Collections 3.7 adds the following new Software Collections: rh-mariadb105 - see Section 1.3.3, "Changes in MariaDB" rh-postgresql13 - see Section 1.3.4, "Changes in PostgreSQL" rh-ruby30 - see Section 1.3.5, "Changes in Ruby" All new Software Collections are available only for Red Hat Enterprise Linux 7. Updated Software Collections The following components have been updated in Red Hat Software Collections 3.7: devtoolset-10 - see Section 1.3.2, "Changes in Red Hat Developer Toolset" rh-jmc - see Section 1.3.6, "Changes in JDK Mission Control" rh-ruby27 - see Section 1.3.5, "Changes in Ruby" rh-ruby26 - see Section 1.3.5, "Changes in Ruby" In addition, a new package, rh-postgresql12-pg_repack, is now available for PostgreSQL 12.
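As a quick orientation before the per-component sections below, a minimal shell sketch of pulling in one of the new collections and running a command inside it (the collection name is taken from the list above; the sketch assumes the Red Hat Software Collections repositories are already enabled on the Red Hat Enterprise Linux 7 host):
# Install the collection meta package (rh-ruby30 is used as the example):
yum install rh-ruby30
# Run a single command inside the collection environment:
scl enable rh-ruby30 'ruby --version'
# Or open a shell with the collection enabled for the whole session:
scl enable rh-ruby30 bash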
Red Hat Software Collections Container Images The following container images are new in Red Hat Software Collections 3.7: rhscl/mariadb-105-rhel7 rhscl/postgresql-13-rhel7 rhscl/ruby-30-rhel7 The following container images have been updated in Red Hat Software Collections 3.7: rhscl/devtoolset-10-toolchain-rhel7 rhscl/devtoolset-10-perftools-rhel7 rhscl/ruby-27-rhel7 rhscl/ruby-26-rhel7 For more information about Red Hat Software Collections container images, see Section 3.4, "Red Hat Software Collections Container Images" . 1.3.2. Changes in Red Hat Developer Toolset The following components have been upgraded in Red Hat Developer Toolset 10.1 compared to the previous release: SystemTap to version 4.4 Dyninst to version 10.2.1 elfutils to version 0.182 In addition, bug fix updates are available for the following components: GCC GDB binutils annobin For detailed information on changes in 10.1, see the Red Hat Developer Toolset User Guide . 1.3.3. Changes in MariaDB The new rh-mariadb105 Software Collection provides MariaDB 10.5.9 . Notable enhancements over the previously available version 10.3 include: MariaDB now uses the unix_socket authentication plug-in by default. The plug-in enables users to use operating system credentials when connecting to MariaDB through the local Unix socket file. MariaDB supports a new FLUSH SSL command to reload SSL certificates without a server restart. MariaDB adds mariadb-* named binaries and mysql* symbolic links pointing to the mariadb-* binaries. For example, the mysqladmin , mysqlaccess , and mysqlshow symlinks point to the mariadb-admin , mariadb-access , and mariadb-show binaries, respectively. MariaDB supports a new INET6 data type for storing IPv6 addresses. MariaDB now uses the Perl Compatible Regular Expressions (PCRE) library version 2. The SUPER privilege has been split into several privileges to better align with each user role. As a result, certain statements have changed required privileges. MariaDB adds a new global variable, binlog_row_metadata , as well as system variables and status variables to control the amount of metadata logged. The default value of the eq_range_index_dive_limit variable has been changed from 0 to 200 . A new SHUTDOWN WAIT FOR ALL SLAVES server command and a new mysqladmin shutdown --wait-for-all-slaves option have been added to instruct the server to shut down only after the last binlog event has been sent to all connected replicas. In parallel replication, the slave_parallel_mode variable now defaults to optimistic . The InnoDB storage engine introduces the following changes: InnoDB now supports an instant DROP COLUMN operation and enables users to change the column order. Defaults of the following variables have been changed: innodb_adaptive_hash_index to OFF and innodb_checksum_algorithm to full_crc32 . Several InnoDB variables have been removed or deprecated. MariaDB Galera Cluster has been upgraded to version 4 with the following notable changes: Galera adds a new streaming replication feature, which supports replicating transactions of unlimited size. During an execution of streaming replication, a cluster replicates a transaction in small fragments. Galera now fully supports Global Transaction ID (GTID). The default value for the wsrep_on option in the /etc/my.cnf.d/galera.cnf file has changed from 1 to 0 to prevent end users from starting wsrep replication without configuring required additional options.
Changes to the PAM plug-in in MariaDB 10.5 include: MariaDB 10.5 adds a new version of the Pluggable Authentication Modules (PAM) plug-in. The PAM plug-in version 2.0 performs PAM authentication using a separate setuid root helper binary, which enables MariaDB to utilize additional PAM modules. In MariaDB 10.5 , the Pluggable Authentication Modules (PAM) plug-in and its related files have been moved to a new subpackage, mariadb-pam . This subpackage contains both PAM plug-in versions: version 2.0 is the default, and version 1.0 is available as the auth_pam_v1 shared object library. The rh-mariadb105-mariadb-pam package is not installed by default with the MariaDB server. To make the PAM authentication plug-in available in MariaDB 10.5 , install the rh-mariadb105-mariadb-pam package manually. The rh-mariadb105 Software Collection includes the rh-mariadb105-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mariadb105*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mariadb105* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. To find out more about syspaths, see the Red Hat Software Collections Packaging Guide . For compatibility notes and migration instructions, see Section 5.1, "Migrating to MariaDB 10.5" . For detailed changes in MariaDB 10.5 , see the upstream documentation . 1.3.4. Changes in PostgreSQL The new rh-postgresql13 Software Collection includes PostgreSQL 13.2 . This release introduces various enhancements over version 12, such as: Performance improvements resulting from de-duplication of B-tree index entries Improved performance for queries that use aggregates or partitioned tables Improved query planning when using extended statistics Parallelized vacuuming of indexes Incremental sorting For detailed changes, see the upstream release notes for PostgreSQL 13 . The following new subpackages are available with the rh-postgresql13 Software Collection: The pg_repack package provides a PostgreSQL extension that lets you remove bloat from tables and indexes, and optionally restore the physical order of clustered indexes. For details, see the upstream documentation regarding usage and examples . The pg_repack subpackage is now available also for the rh-postgresql12 Software Collection. The plpython3 package provides the PL/Python procedural language extension based on Python 3 . PL/Python enables you to write PostgreSQL functions in the Python programming language. For details, see the upstream documentation . Previously released PostgreSQL Software Collections include only the plpython package based on Python 2 . Red Hat Enterprise Linux 8 provides only plpython3 . The rh-postgresql13 Software Collection includes both plpython and plpython3 , so that you can migrate to plpython3 before upgrading to Red Hat Enterprise Linux 8. In addition, the rh-postgresql13 Software Collection includes the rh-postgresql13-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and others. After installing the rh-postgresql13*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-postgresql13* packages. 
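To illustrate the difference that the syspaths packages make, a minimal sketch (the package and collection names are the ones mentioned above; the commands assume the rh-postgresql13 collection is already installed):
# Without the system-wide wrappers, collection binaries are reached through scl:
scl enable rh-postgresql13 'psql --version'
# After installing the wrappers, the same binary is on the regular PATH:
yum install rh-postgresql13-syspaths
psql --version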
Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. To find out more about syspaths, see the Red Hat Software Collections Packaging Guide . Note that support for Just-In-Time (JIT) compilation, available in upstream since PostgreSQL 11 , is not provided by the rh-postgresql13 Software Collection. For information on migration, see Section 5.3, "Migrating to PostgreSQL 13" . 1.3.5. Changes in Ruby The new rh-ruby30 Software Collection provides Ruby 3.0.1 , which introduces a number of performance improvements, bug fixes, and new features. Notable enhancements include: Concurrency and parallelism features: Ractor , an Actor-model abstraction that provides thread-safe parallel execution, is provided as an experimental feature. Fiber Scheduler has been introduced as an experimental feature. Fiber Scheduler intercepts blocking operations, which enables light-weight concurrency without changing existing code. Static analysis features: The RBS language has been introduced, which describes the structure of Ruby programs. The rbs gem has been added to parse type definitions written in RBS . The TypeProf utility has been introduced, which is a type analysis tool for Ruby code. Pattern matching with the case / in expression is no longer experimental. One-line pattern matching has been redesigned as an experimental feature. Find pattern has been added as an experimental feature. The following performance improvements have been implemented: Pasting long code to the Interactive Ruby Shell (IRB) is now significantly faster. The measure command has been added to IRB for time measurement. Other notable changes include: Keyword arguments have been separated from other arguments, see the upstream documentation for details. The default directory for user-installed gems is now USDHOME/.local/share/gem/ unless the USDHOME/.gem/ directory is already present. For more information about changes in Ruby 3.0 , see the upstream announcement for version 3.0.0 and 3.0.1 . The rh-ruby27 and rh-ruby26 Software Collections have been updated with security and bug fixes. 1.3.6. Changes in JDK Mission Control JDK Mission Control (JMC), provided by the rh-jmc Software Collection, has been upgraded from version 7.1.1 to version 8.0.0. Notable enhancements include: The Treemap viewer has been added to the JOverflow plug-in for visualizing memory usage by classes. The Threads graph has been enhanced with more filtering and zoom options. JDK Mission Control now provides support for opening JDK Flight Recorder recordings compressed with the LZ4 algorithm. New columns have been added to the Memory and TLAB views to help you identify areas of allocation pressure. Graph view has been added to improve visualization of stack traces. The Percentage column has been added to histogram tables. For more information, see the upstream release notes . 1.4. Compatibility Information Red Hat Software Collections 3.7 is available for all supported releases of Red Hat Enterprise Linux 7 on AMD and Intel 64-bit architectures, 64-bit IBM Z, and IBM POWER, little endian. Certain previously released components are available also for the 64-bit ARM architecture. For a full list of available components, see Table 1.2, "All Available Software Collections" . 1.5. Known Issues rh-mariadb105 component, BZ# 1942526 When the OQGraph storage engine plug-in is loaded to the MariaDB 10.5 server, MariaDB does not warn about dropping a non-existent table. 
In particular, when the user attempts to drop a non-existent table using the DROP TABLE or DROP TABLE IF EXISTS SQL commands, MariaDB neither returns an error message nor logs a warning. Note that the OQGraph plug-in is provided by the mariadb-oqgraph-engine package, which is not installed by default. rh-mariadb component The rh-mariadb103 Software Collection provides the Pluggable Authentication Modules (PAM) plug-in version 1.0. The rh-mariadb105 Software Collection provides the plug-in versions 1.0 and 2.0, version 2.0 is the default. The PAM plug-in version 1.0 in MariaDB does not work. To work around this problem, use the PAM plug-in version 2.0 provided by rh-mariadb105 . rh-ruby27 component, BZ# 1836201 When a custom script requires the Psych YAML parser and afterwards uses the Gem.load_yaml method, running the script fails with the following error message: To work around this problem, add the gem 'psych' line to the script somewhere above the require 'psych' line: ... gem 'psych' ... require 'psych' Gem.load_yaml multiple components, BZ# 1716378 Certain files provided by the Software Collections debuginfo packages might conflict with the corresponding debuginfo package files from the base Red Hat Enterprise Linux system or from other versions of Red Hat Software Collections components. For example, the python27-python-debuginfo package files might conflict with the corresponding files from the python-debuginfo package installed on the core system. Similarly, files from the httpd24-mod_auth_mellon-debuginfo package might conflict with similar files provided by the base system mod_auth_mellon-debuginfo package. To work around this problem, uninstall the base system debuginfo package prior to installing the Software Collection debuginfo package. rh-mysql80 component, BZ# 1646363 The mysql-connector-java database connector does not work with the MySQL 8.0 server. To work around this problem, use the mariadb-java-client database connector from the rh-mariadb103 Software Collection. rh-mysql80 component, BZ# 1646158 The default character set has been changed to utf8mb4 in MySQL 8.0 but this character set is unsupported by the php-mysqlnd database connector. Consequently, php-mysqlnd fails to connect in the default configuration. To work around this problem, specify a known character set as a parameter of the MySQL server configuration. For example, modify the /etc/opt/rh/rh-mysql80/my.cnf.d/mysql-server.cnf file to read: httpd24 component, BZ# 1429006 Since httpd 2.4.27 , the mod_http2 module is no longer supported with the default prefork Multi-Processing Module (MPM). To enable HTTP/2 support, edit the configuration file at /opt/rh/httpd24/root/etc/httpd/conf.modules.d/00-mpm.conf and switch to the event or worker MPM. Note that the HTTP/2 server-push feature does not work on the 64-bit ARM architecture, 64-bit IBM Z, and IBM POWER, little endian. httpd24 component, BZ# 1224763 When using the mod_proxy_fcgi module with FastCGI Process Manager (PHP-FPM), httpd uses port 8000 for the FastCGI protocol by default instead of the correct port 9000 . To work around this problem, specify the correct port explicitly in configuration. httpd24 component, BZ# 1382706 When SELinux is enabled, the LD_LIBRARY_PATH environment variable is not passed through to CGI scripts invoked by httpd . As a consequence, in some cases it is impossible to invoke executables from Software Collections enabled in the /opt/rh/httpd24/service-environment file from CGI scripts run by httpd . 
To work around this problem, set LD_LIBRARY_PATH as desired from within the CGI script. httpd24 component Compiling external applications against the Apache Portable Runtime (APR) and APR-util libraries from the httpd24 Software Collection is not supported. The LD_LIBRARY_PATH environment variable is not set in httpd24 because it is not required by any application in this Software Collection. scl-utils component In Red Hat Enterprise Linux 7.5 and earlier, due to an architecture-specific macro bug in the scl-utils package, the <collection>/root/usr/lib64/ directory does not have the correct package ownership on the 64-bit ARM architecture and on IBM POWER, little endian. As a consequence, this directory is not removed when a Software Collection is uninstalled. To work around this problem, manually delete <collection>/root/usr/lib64/ when removing a Software Collection. maven component When the user has installed both the Red Hat Enterprise Linux system version of maven-local package and the rh-maven*-maven-local package, XMvn , a tool used for building Java RPM packages, run from the Maven Software Collection tries to read the configuration file from the base system and fails. To work around this problem, uninstall the maven-local package from the base Red Hat Enterprise Linux system. perl component It is impossible to install more than one mod_perl.so library. As a consequence, it is not possible to use the mod_perl module from more than one Perl Software Collection. httpd , mariadb , mysql , nodejs , perl , php , python , and ruby components, BZ# 1072319 When uninstalling the httpd24 , rh-mariadb* , rh-mysql* , rh-nodejs* , rh-perl* , rh-php* , python27 , rh-python* , or rh-ruby* packages, the order of uninstalling can be relevant due to ownership of dependent packages. As a consequence, some directories and files might not be removed properly and might remain on the system. mariadb , mysql components, BZ# 1194611 Since MariaDB 10 and MySQL 5.6 , the rh-mariadb*-mariadb-server and rh-mysql*-mysql-server packages no longer provide the test database by default. Although this database is not created during initialization, the grant tables are prefilled with the same values as when test was created by default. As a consequence, upon a later creation of the test or test_* databases, these databases have less restricted access rights than is default for new databases. Additionally, when running benchmarks, the run-all-tests script no longer works out of the box with example parameters. You need to create a test database before running the tests and specify the database name in the --database parameter. If the parameter is not specified, test is taken by default but you need to make sure the test database exist. mariadb , mysql , postgresql components Red Hat Software Collections contains the MySQL 8.0 , MariaDB 10.3 , MariaDB 10.5 , PostgreSQL 10 , PostgreSQL 12 , and PostgreSQL 13 database servers. The core Red Hat Enterprise Linux 7 provides earlier versions of the MariaDB and PostgreSQL databases (client library and daemon). Client libraries are also used in database connectors for dynamic languages, libraries, and so on. The client library packaged in the Red Hat Software Collections database packages in the PostgreSQL component is not supposed to be used, as it is included only for purposes of server utilities and the daemon. Users are instead expected to use the system library and the database connectors provided with the core system. 
A protocol, which is used between the client library and the daemon, is stable across database versions, so, for example, using the PostgreSQL 10 client library with the PostgreSQL 12 or 13 daemon works as expected. mariadb , mysql components MariaDB and MySQL do not make use of the /opt/ provider / collection /root prefix when creating log files. Note that log files are saved in the /var/opt/ provider / collection /log/ directory, not in /opt/ provider / collection /root/var/log/ . 1.6. Other Notes rh-ruby* , rh-python* , rh-php* components Using Software Collections on a read-only NFS has several limitations. Ruby gems cannot be installed while the rh-ruby* Software Collection is on a read-only NFS. Consequently, for example, when the user tries to install the ab gem using the gem install ab command, an error message is displayed, for example: The same problem occurs when the user tries to update or install gems from an external source by running the bundle update or bundle install commands. When installing Python packages on a read-only NFS using the Python Package Index (PyPI), running the pip command fails with an error message similar to this: Installing packages from PHP Extension and Application Repository (PEAR) on a read-only NFS using the pear command fails with the error message: This is an expected behavior. httpd component Language modules for Apache are supported only with the Red Hat Software Collections version of Apache httpd and not with the Red Hat Enterprise Linux system versions of httpd . For example, the mod_wsgi module from the rh-python35 Collection can be used only with the httpd24 Collection. all components Since Red Hat Software Collections 2.0, configuration files, variable data, and runtime data of individual Collections are stored in different directories than in versions of Red Hat Software Collections. coreutils , util-linux , screen components Some utilities, for example, su , login , or screen , do not export environment settings in all cases, which can lead to unexpected results. It is therefore recommended to use sudo instead of su and set the env_keep environment variable in the /etc/sudoers file. Alternatively, you can run commands in a reverse order; for example: instead of When using tools like screen or login , you can use the following command to preserve the environment settings: source /opt/rh/<collection_name>/enable python component When the user tries to install more than one scldevel package from the python27 and rh-python* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_python , %scl_ prefix _python ). php component When the user tries to install more than one scldevel package from the rh-php* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_php , %scl_ prefix _php ). ruby component When the user tries to install more than one scldevel package from the rh-ruby* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_ruby , %scl_ prefix _ruby ). perl component When the user tries to install more than one scldevel package from the rh-perl* Software Collections, a transaction check error message is returned. 
This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_perl , %scl_ prefix _perl ). nginx component When the user tries to install more than one scldevel package from the rh-nginx* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_nginx , %scl_ prefix _nginx ). python component To mitigate the Web Cache Poisoning CVE-2021-23336 in the Python urllib library, the default separator for the urllib.parse.parse_qsl and urllib.parse.parse_qs functions is being changed from both ampersand ( & ) and semicolon ( ; ) to only an ampersand. This change has been implemented in the python27 and rh-python38 Software Collections with the release of the RHSA-2021:3252 and RHSA-2021:3254 advisories. The change of the default separator is potentially backwards incompatible, therefore Red Hat provides a way to configure the behavior in Python packages where the default separator has been changed. In addition, the affected urllib parsing functions issue a warning if they detect that a customer's application has been affected by the change. For more information, see the Mitigation of Web Cache Poisoning in the Python urllib library (CVE-2021-23336) Knowledgebase article. python component The release of the RHSA-2021:3254 advisory introduces the following change in the rh-python38 Software Collection: To mitigate CVE-2021-29921 , the Python ipaddress module now rejects IPv4 addresses with leading zeros with an AddressValueError: Leading zeros are not permitted error. Customers who rely on the behavior can pre-process their IPv4 address inputs to strip the leading zeros off. For example: To strip the leading zeros off with an explicit loop for readability, use: 1.7. Deprecated Functionality httpd24 component, BZ# 1434053 Previously, in an SSL/TLS configuration requiring name-based SSL virtual host selection, the mod_ssl module rejected requests with a 400 Bad Request error, if the host name provided in the Host: header did not match the host name provided in a Server Name Indication (SNI) header. Such requests are no longer rejected if the configured SSL/TLS security parameters are identical between the selected virtual hosts, in-line with the behavior of upstream mod_ssl .
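As a quick, hedged illustration of the urllib separator change described above (not taken from the release notes; the exact output and any warnings depend on whether your Python build already contains the fix), you can parse a query string that mixes both separators and observe that only the ampersand is treated as a separator:
python3 -c 'from urllib.parse import parse_qsl; print(parse_qsl("a=1;b=2&c=3"))'
# on a patched interpreter this prints [('a', '1;b=2'), ('c', '3')] rather than three separate pairs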
[ "superclass mismatch for class Mark (TypeError)", "gem 'psych' require 'psych' Gem.load_yaml", "[mysqld] character-set-server=utf8", "ERROR: While executing gem ... (Errno::EROFS) Read-only file system @ dir_s_mkdir - /opt/rh/rh-ruby22/root/usr/local/share/gems", "Read-only file system: '/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/ipython-3.1.0.dist-info'", "Cannot install, php_dir for channel \"pear.php.net\" is not writeable by the current user", "su -l postgres -c \"scl enable rh-postgresql94 psql\"", "scl enable rh-postgresql94 bash su -l postgres -c psql", ">>> def reformat_ip(address): return '.'.join(part.lstrip('0') if part != '0' else part for part in address.split('.')) >>> reformat_ip('0127.0.0.1') '127.0.0.1'", "def reformat_ip(address): parts = [] for part in address.split('.'): if part != \"0\": part = part.lstrip('0') parts.append(part) return '.'.join(parts)" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.7_release_notes/chap-RHSCL
Chapter 5. References
Chapter 5. References See the following reference materials to learn more. 5.1. Reference materials To learn more about the vulnerability service, see the following resources: Generating Vulnerability Service Reports Red Hat Insights for Red Hat Enterprise Linux Documentation Red Hat Insights for Red Hat Enterprise Linux Product Support page
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_monitoring_security_vulnerabilities_on_rhel_systems/vuln-ref-materials_vuln-overview
Chapter 1. Node APIs
Chapter 1. Node APIs 1.1. Node [v1] Description Node is a worker node in Kubernetes. Each node will have a unique identifier in the cache (i.e. in etcd). Type object 1.2. PerformanceProfile [performance.openshift.io/v2] Description PerformanceProfile is the Schema for the performanceprofiles API Type object 1.3. Profile [tuned.openshift.io/v1] Description Profile is a specification for a Profile resource. Type object 1.4. RuntimeClass [node.k8s.io/v1] Description RuntimeClass defines a class of container runtime supported in the cluster. The RuntimeClass is used to determine which container runtime is used to run all containers in a pod. RuntimeClasses are manually defined by a user or cluster provisioner, and referenced in the PodSpec. The Kubelet is responsible for resolving the RuntimeClassName reference before running the pod. For more details, see https://kubernetes.io/docs/concepts/containers/runtime-class/ Type object 1.5. Tuned [tuned.openshift.io/v1] Description Tuned is a collection of rules that allows cluster-wide deployment of node-level sysctls and more flexibility to add custom tuning specified by user needs. These rules are translated and passed to all containerized Tuned daemons running in the cluster in the format that the daemons understand. The responsibility for applying the node-level tuning then lies with the containerized Tuned daemons. More info: https://github.com/openshift/cluster-node-tuning-operator Type object
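If you want to see which of these API objects exist on a running cluster, you can list them with the oc client; the fully qualified resource names below mirror the API groups listed above, and the namespace used for the Tuned and Profile objects is the one conventionally created by the Node Tuning Operator, so treat both the namespace and the presence of the PerformanceProfile CRD as assumptions for your cluster:
oc get nodes
oc get runtimeclasses.node.k8s.io
oc get tuneds.tuned.openshift.io,profiles.tuned.openshift.io -n openshift-cluster-node-tuning-operator
oc get performanceprofiles.performance.openshift.io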
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/node_apis/node-apis
probe::socket.readv.return
probe::socket.readv.return Name probe::socket.readv.return - Conclusion of receiving a message via sock_readv Synopsis socket.readv.return Values name Name of this probe protocol Protocol value family Protocol family value success Was receive successful? (1 = yes, 0 = no) state Socket state value flags Socket flags value size Size of message received (in bytes) or error code if success = 0 type Socket type value Context The message receiver. Description Fires at the conclusion of receiving a message on a socket via the sock_readv function
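A minimal SystemTap one-liner that prints these values is sketched below; it assumes the systemtap package and matching kernel debuginfo are installed, and that the underlying sock_readv function exists in your kernel version:
stap -e 'probe socket.readv.return { printf("%s family=%d type=%d size=%d success=%d\n", name, family, type, size, success) }'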
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-socket-readv-return
Chapter 7. Scaling storage of Red Hat Virtualization OpenShift Data Foundation cluster
Chapter 7. Scaling storage of Red Hat Virtualization OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on Red Hat Virtualization cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain the high availability. So the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. 7.1. Scaling up storage capacity on a cluster To increase the storage capacity in a dynamically created storage cluster on an user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. The disk should be of the same size and type as used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class which you wish to use to provision new storage devices. Click Add . To check the status, navigate to Storage Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 7.2. Scaling out storage capacity on a Red Hat Virtualization cluster OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with required storage and enough hardware resources in terms of CPU and RAM. 
Practically there is no limit on the number of nodes which can be added but from the support perspective 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps Adding new node Scaling up the storage capacity Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 7.2.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . Click Compute Nodes and confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 7.2.2. Adding a node using a local storage device You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. Add nodes in the multiple of 3, each of them in different failure domains. Though it is recommended to add nodes in multiples of 3 nodes, you have the flexibility to add one node at a time in flexible scaling deployment. See Knowledgebase article Verify if flexible scaling is enabled Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during initial OpenShift Data Foundation deployment. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Click Operators Installed Operators from the OpenShift Web Console. From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed. Click Local Storage . Click the Local Volume Discovery tab. 
Beside the LocalVolumeDiscovery , click Action menu (...) Edit Local Volume Discovery . In the YAML, add the hostname of the new node in the values field under the node selector. Click Save . Click the Local Volume Sets tab. Beside the LocalVolumeSet , click Action menu (...) Edit Local Volume Set . In the YAML, add the hostname of the new node in the values field under the node selector . Click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps Execute the following command the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 7.2.3. Scaling up storage capacity To scale up storage capacity: For dynamic storage devices, see Scaling up storage capacity on a cluster . For local storage devices, see Scaling up a cluster created using local storage devices
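After adding capacity or nodes, a quick command-line check of the new OSD pods and their claims can complement the console steps above; the label and name patterns below are the ones commonly used by the Rook-Ceph operator and should be treated as assumptions for your release:
oc get pods -n openshift-storage -l app=rook-ceph-osd -o wide
oc get pvc -n openshift-storage | grep ocs-deviceset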
[ "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node-name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get csr", "oc adm certificate approve <Certificate_Name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/scaling_storage/scaling_storage_of_red_hat_virtualization_openshift_data_foundation_cluster
3.8. Enabling IP Multicast with IGMP
3.8. Enabling IP Multicast with IGMP The Internet Group Management Protocol (IGMP) enables the administrator to manage routing and subscription to multicast traffic between networks, hosts, and routers. The kernel in Red Hat Enterprise Linux supports IGMPv3. To display multicast information, use the ip maddr show subcommand, for example: Alternatively, look for the MULTICAST string in the ip link show command output, for example: To disable multicast on a device and to check that multicast is disabled on the br0 device: The missing MULTICAST string indicates that multicast is disabled. To enable multicast on the br0 device and to check it is enabled: See the ip Command Cheat Sheet for Red Hat Enterprise Linux article and the ip(8) man page for more information. To check current version of IGMP and IP addresses subscribed for multicasting, see the /proc/net/igmp file: Note IGMP is not enabled in firewalld by default. To enable IGMP for a zone: See the Using Firewalls chapter in the Red Hat Enterprise Linux Security Guide for more information.
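To verify afterwards whether a zone already permits IGMP, or to make the change persistent across reloads, the following firewall-cmd calls can be used; the zone name public is only an example:
firewall-cmd --zone=public --query-protocol=igmp
firewall-cmd --permanent --zone=public --add-protocol=igmp
firewall-cmd --reload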
[ "~]USD ip maddr show dev br0 8: br0 inet 224.0.0.1 inet6 ff02::1 inet6 ff01::1 [output truncated]", "~]USD ip link show br0 8: br0: <BROADCAST, MULTICAST ,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000 link/ether 6c:0b:84:67:fe:63 brd ff:ff:ff:ff:ff:ff", "~]# ip link set multicast off dev br0 ~]USD ip link show br0 8: br0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000 link/ether 6c:0b:84:67:fe:63 brd ff:ff:ff:ff:ff:ff", "~]# ip link set multicast on dev br0 ~]USD ip link show br0 8: br0: <BROADCAST, MULTICAST ,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000 link/ether 6c:0b:84:67:fe:63 brd ff:ff:ff:ff:ff:ff", "~]USD cat /proc/net/igmp", "~]# firewall-cmd --zone= zone-name --add-protocol=igmp" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-enabling_ip_multicast_with_igmp
1.2.3.3. Converged Networks
1.2.3.3. Converged Networks Communication over the network is normally done through Ethernet, with storage traffic using a dedicated Fibre Channel SAN environment. It is common to have a dedicated network or serial link for system management, and perhaps even heartbeat [2] . As a result, a single server is typically on multiple networks. Providing multiple connections on each server is expensive, bulky, and complex to manage. This gave rise to the need for a way to consolidate all connections into one. Fibre Channel over Ethernet (FCoE) and Internet SCSI (iSCSI) address this need. FCoE With FCoE, standard fibre channel commands and data packets are transported over a 10GbE physical infrastructure via a single converged network adapter (CNA). Standard TCP/IP ethernet traffic and fibre channel storage operations can be transported via the same link. FCoE uses one physical network interface card (and one cable) for multiple logical network/storage connections. FCoE offers the following advantages: Reduced number of connections FCoE reduces the number of network connections to a server by half. You can still choose to have multiple connections for performance or availability; however, a single connection provides both storage and network connectivity. This is especially helpful for pizza box servers and blade servers, since they both have very limited space for components. Lower cost Reduced number of connections immediately means reduced number of cables, switches, and other networking equipment. Ethernet's history also features great economies of scale; the cost of networks drops dramatically as the number of devices in the market goes from millions to billions, as was seen in the decline in the price of 100Mb Ethernet and gigabit Ethernet devices. Similarly, 10GbE will also become cheaper as more businesses adapt to its use. Also, as CNA hardware is integrated into a single chip, widespread use will also increase its volume in the market, which will result in a significant price drop over time. iSCSI Internet SCSI (iSCSI) is another type of converged network protocol; it is an alternative to FCoE. Like fibre channel, iSCSI provides block-level storage over a network. However, iSCSI does not provide a complete management environment. The main advantage of iSCSI over FCoE is that iSCSI provides much of the capability and flexibility of fibre channel, but at a lower cost. [2] Heartbeat is the exchange of messages between systems to ensure that each system is still functioning. If a system "loses heartbeat" it is assumed to have failed and is shut down, with another system taking over for it.
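As a brief, hedged illustration of the iSCSI workflow described above (the portal address and target IQN are placeholders, and the iscsi-initiator-utils package is assumed to be installed), discovering and logging in to a target from a Linux initiator typically looks like this:
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m node -T iqn.2014-01.com.example:storage.target1 -p 192.0.2.10 --login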
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/distributed-systems-fcoe
probe::nfs.aop.release_page
probe::nfs.aop.release_page Name probe::nfs.aop.release_page - NFS client releasing page Synopsis nfs.aop.release_page Values size release pages ino inode number dev device identifier __page the address of the page page_index offset within mapping; can be used as a page identifier and position identifier in the page frame Description Fires when a release operation is performed on NFS.
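A hedged SystemTap one-liner that prints a few of these values is shown below; it assumes the systemtap package and kernel debuginfo for your running kernel are installed, and that this probe point is available in your kernel's NFS client:
stap -e 'probe nfs.aop.release_page { printf("ino=%d dev=%d index=%d size=%d\n", ino, dev, page_index, size) }'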
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfs-aop-release-page
Chapter 52. Oracle WebLogic Server
Chapter 52. Oracle WebLogic Server Oracle WebLogic Server is a Java EE application server that provides a standard set of APIs for creating distributed Java applications that can access a wide variety of services, such as databases, messaging services, and connections to external enterprise systems. User clients access these applications using web browser clients or Java clients.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/wls-con
Chapter 40. Groovy
Chapter 40. Groovy Since Camel 1.3 Camel has support for using Groovy . For example, you can use Groovy in a Predicate with the Message Filter EIP. 40.1. Dependencies When using camel-groovy with Red Hat build of Camel Spring Boot, add the following Maven dependency to your pom.xml to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-groovy-starter</artifactId> </dependency> 40.2. URI Format The camel-groovy language component uses the following URI notation: groovy("someGroovyExpression") 40.3. Groovy Options The Groovy language supports 2 options, which are listed below. Name Default Java Type Description resultType String Sets the class of the result type (type from output). trim true Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks. 40.4. Examples The following example uses a Groovy script as a predicate in the Message Filter to determine whether any line items are over $100: Java from("queue:foo") .filter(groovy("request.lineItems.any { i -> i.value > 100 }")) .to("queue:bar") XML DSL <route> <from uri="queue:foo"/> <filter> <groovy>request.lineItems.any { i -> i.value > 100 }</groovy> <to uri="queue:bar"/> </filter> </route> 40.5. Groovy Context Camel provides exchange information in the Groovy context (just a Map ). The Exchange is transferred as: key value exchange The Exchange itself. exchangeProperties The Exchange properties. variables The variables. headers The headers of the In message. camelContext The Camel Context. request The In message. body The In message body. response The Out message (only for the InOut message exchange pattern). 40.6. How to get the result from a multiple-statement script The Groovy script engine evaluate method returns null when it runs a multiple-statement script, so Camel looks up the script result by using the result key from the value set. If you have a multiple-statement script, make sure to set the result variable to the value that the script should return. bar = "baz"; # some other statements ... # Camel takes the result value as the script evaluation result result = body * 2 + 1 40.7. Customizing Groovy Shell For very special use cases, you may need to use a custom GroovyShell instance in your Groovy expressions. To provide the custom GroovyShell , add an implementation of the org.apache.camel.language.groovy.GroovyShellFactory SPI interface to the Camel registry. public class CustomGroovyShellFactory implements GroovyShellFactory { public GroovyShell createGroovyShell(Exchange exchange) { ImportCustomizer importCustomizer = new ImportCustomizer(); importCustomizer.addStaticStars("com.example.Utils"); CompilerConfiguration configuration = new CompilerConfiguration(); configuration.addCompilationCustomizers(importCustomizer); return new GroovyShell(configuration); } } Camel will then use your custom GroovyShell instance (containing your custom static imports) instead of the default one. 40.8. Loading script from external resource You can externalize the script and have Camel load it from a resource such as "classpath:" , "file:" , or "http:" . You can achieve this by using the syntax "resource:scheme:location" . For example, to refer to a file on the classpath, you can use the following: .setHeader("myHeader").groovy("resource:classpath:mygroovy.groovy") 40.9. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. 
Name Description Default Type camel.language.groovy.enabled Whether to enable auto configuration of the groovy language. This is enabled by default. Boolean camel.language.groovy.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-groovy-starter</artifactId> </dependency>", "groovy(\"someGroovyExpression\")", "from(\"queue:foo\") .filter(groovy(\"request.lineItems.any { i -> i.value > 100 }\")) .to(\"queue:bar\")", "<route> <from uri=\"queue:foo\"/> <filter> <groovy>request.lineItems.any { i -> i.value > 100 }</groovy> <to uri=\"queue:bar\"/> </filter> </route>", "bar = \"baz\"; some other statements camel take the result value as the script evaluation result result = body * 2 + 1", "public class CustomGroovyShellFactory implements GroovyShellFactory { public GroovyShell createGroovyShell(Exchange exchange) { ImportCustomizer importCustomizer = new ImportCustomizer(); importCustomizer.addStaticStars(\"com.example.Utils\"); CompilerConfiguration configuration = new CompilerConfiguration(); configuration.addCompilationCustomizers(importCustomizer); return new GroovyShell(configuration); } }", "`\"resource:scheme:location\"`,", ".setHeader(\"myHeader\").groovy(\"resource:classpath:mygroovy.groovy\")" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-groovy-language-starter
Chapter 4. Configuring applications to use cryptographic hardware through PKCS #11
Chapter 4. Configuring applications to use cryptographic hardware through PKCS #11 Storing parts of your secret information on dedicated cryptographic devices, such as smart cards and cryptographic tokens for end-user authentication and hardware security modules (HSM) for server applications, provides an additional layer of security. In RHEL, support for cryptographic hardware through the PKCS #11 API is consistent across different applications, and the isolation of secrets on cryptographic hardware is not a complicated task. 4.1. Cryptographic hardware support through PKCS #11 Public-Key Cryptography Standard (PKCS) #11 defines an application programming interface (API) to cryptographic devices that hold cryptographic information and perform cryptographic functions. PKCS #11 introduces the cryptographic token , an object that presents each hardware or software device to applications in a unified manner. Therefore, applications view devices such as smart cards, which are typically used by persons, and hardware security modules, which are typically used by computers, as PKCS #11 cryptographic tokens. A PKCS #11 token can store various object types including a certificate; a data object; and a public, private, or secret key. These objects are uniquely identifiable through the PKCS #11 Uniform Resource Identifier (URI) scheme. A PKCS #11 URI is a standard way to identify a specific object in a PKCS #11 module according to the object attributes. This enables you to configure all libraries and applications with the same configuration string in the form of a URI. RHEL provides the OpenSC PKCS #11 driver for smart cards by default. However, hardware tokens and HSMs can have their own PKCS #11 modules that do not have their counterpart in the system. You can register such PKCS #11 modules with the p11-kit tool, which acts as a wrapper over the registered smart-card drivers in the system. To make your own PKCS #11 module work on the system, create a new text file in the /etc/pkcs11/modules/ directory. For example, the OpenSC configuration file in p11-kit looks as follows: Additional resources The PKCS #11 URI Scheme Controlling access to smart cards 4.2. Authenticating by SSH keys stored on a smart card You can create and store ECDSA and RSA keys on a smart card and authenticate by the smart card on an OpenSSH client. Smart-card authentication replaces the default password authentication. Prerequisites On the client side, the opensc package is installed and the pcscd service is running. Procedure List all keys provided by the OpenSC PKCS #11 module including their PKCS #11 URIs and save the output to the keys.pub file: Transfer the public key to the remote server. Use the ssh-copy-id command with the keys.pub file created in the previous step: Connect to <ssh-server-example.com> by using the ECDSA key. You can use just a subset of the URI, which uniquely references your key, for example: Because OpenSSH uses the p11-kit-proxy wrapper and the OpenSC PKCS #11 module is registered to the p11-kit tool, you can simplify the command: If you skip the id= part of a PKCS #11 URI, OpenSSH loads all keys that are available in the proxy module. This can reduce the amount of typing required: Optional: You can use the same URI string in the ~/.ssh/config file to make the configuration permanent: The ssh client utility now automatically uses this URI and the key from the smart card. 
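If the key does not appear, it can help to first confirm that the card is visible through PKCS #11 at all; either of the following commands lists the detected tokens (p11tool is typically shipped in gnutls-utils and pkcs11-tool in opensc, so treat the package names as assumptions for your release):
p11tool --list-tokens
pkcs11-tool --module /usr/lib64/pkcs11/opensc-pkcs11.so --list-slots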
Additional resources p11-kit(8) , opensc.conf(5) , pcscd(8) , ssh(1) , and ssh-keygen(1) man pages on your system 4.3. Configuring applications for authentication with certificates on smart cards Authentication by using smart cards in applications may increase security and simplify automation. You can integrate the Public Key Cryptography Standard (PKCS) #11 URIs into your application by using the following methods: The Firefox web browser automatically loads the p11-kit-proxy PKCS #11 module. This means that every supported smart card in the system is automatically detected. For using TLS client authentication, no additional setup is required and keys and certificates from a smart card are automatically used when a server requests them. If your application uses the GnuTLS or NSS library, it already supports PKCS #11 URIs. Also, applications that rely on the OpenSSL library can access cryptographic hardware modules, including smart cards, through the pkcs11 engine provided by the openssl-pkcs11 package. Applications that require working with private keys on smart cards and that do not use NSS , GnuTLS , nor OpenSSL can use the p11-kit API directly to work with cryptographic hardware modules, including smart cards, rather than using the PKCS #11 API of specific PKCS #11 modules. With the the wget network downloader, you can specify PKCS #11 URIs instead of paths to locally stored private keys and certificates. This might simplify creation of scripts for tasks that require safely stored private keys and certificates. For example: You can also specify PKCS #11 URI when using the curl tool: Note Because a PIN is a security measure that controls access to keys stored on a smart card and the configuration file contains the PIN in the plain-text form, consider additional protection to prevent an attacker from reading the PIN. For example, you can use the pin-source attribute and provide a file: URI for reading the PIN from a file. See RFC 7512: PKCS #11 URI Scheme Query Attribute Semantics for more information. Note that using a command path as a value of the pin-source attribute is not supported. Additional resources curl(1) , wget(1) , and p11-kit(8) man pages on your system 4.4. Using HSMs protecting private keys in Apache The Apache HTTP server can work with private keys stored on hardware security modules (HSMs), which helps to prevent the keys' disclosure and man-in-the-middle attacks. Note that this usually requires high-performance HSMs for busy servers. For secure communication in the form of the HTTPS protocol, the Apache HTTP server ( httpd ) uses the OpenSSL library. OpenSSL does not support PKCS #11 natively. To use HSMs, you have to install the openssl-pkcs11 package, which provides access to PKCS #11 modules through the engine interface. You can use a PKCS #11 URI instead of a regular file name to specify a server key and a certificate in the /etc/httpd/conf.d/ssl.conf configuration file, for example: Install the httpd-manual package to obtain complete documentation for the Apache HTTP Server, including TLS configuration. The directives available in the /etc/httpd/conf.d/ssl.conf configuration file are described in detail in the /usr/share/httpd/manual/mod/mod_ssl.html file. 4.5. Using HSMs protecting private keys in Nginx The Nginx HTTP server can work with private keys stored on hardware security modules (HSMs), which helps to prevent the keys' disclosure and man-in-the-middle attacks. Note that this usually requires high-performance HSMs for busy servers. 
Because Nginx also uses the OpenSSL library for cryptographic operations, support for PKCS #11 must go through the openssl-pkcs11 engine. Nginx currently supports only loading private keys from an HSM, and a certificate must be provided separately as a regular file. Modify the ssl_certificate and ssl_certificate_key options in the server section of the /etc/nginx/nginx.conf configuration file: Note that the engine:pkcs11: prefix is needed for the PKCS #11 URI in the Nginx configuration file. This is because the other pkcs11 prefix refers to the engine name. 4.6. Additional resources pkcs11.conf(5) man page on your system
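After switching either server to a PKCS #11 URI, it is worth validating the configuration syntax before reloading; note that these checks may not fully exercise access to the HSM itself:
apachectl configtest
nginx -t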
[ "cat /usr/share/p11-kit/modules/opensc.module module: opensc-pkcs11.so", "ssh-keygen -D pkcs11: > keys.pub", "ssh-copy-id -f -i keys.pub <[email protected]>", "ssh -i \"pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so\" <ssh-server-example.com> Enter PIN for 'SSH key': [ssh-server-example.com] USD", "ssh -i \"pkcs11:id=%01\" <ssh-server-example.com> Enter PIN for 'SSH key': [ssh-server-example.com] USD", "ssh -i pkcs11: <ssh-server-example.com> Enter PIN for 'SSH key': [ssh-server-example.com] USD", "cat ~/.ssh/config IdentityFile \"pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so\" ssh <ssh-server-example.com> Enter PIN for 'SSH key': [ssh-server-example.com] USD", "wget --private-key 'pkcs11:token=softhsm;id=%01;type=private?pin-value=111111' --certificate 'pkcs11:token=softhsm;id=%01;type=cert' https://example.com/", "curl --key 'pkcs11:token=softhsm;id=%01;type=private?pin-value=111111' --cert 'pkcs11:token=softhsm;id=%01;type=cert' https://example.com/", "SSLCertificateFile \"pkcs11:id=%01;token=softhsm;type=cert\" SSLCertificateKeyFile \"pkcs11:id=%01;token=softhsm;type=private?pin-value=111111\"", "ssl_certificate /path/to/cert.pem ssl_certificate_key \"engine:pkcs11:pkcs11:token=softhsm;id=%01;type=private?pin-value=111111\";" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/security_hardening/configuring-applications-to-use-cryptographic-hardware-through-pkcs-11_security-hardening
Chapter 1. Documentation moved
Chapter 1. Documentation moved The OpenShift sandboxed containers user guide and release notes have moved to a new location .
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/openshift_sandboxed_containers/sandboxed-containers-moved
Chapter 28. Data sets authoring
Chapter 28. Data sets authoring A data set is a collection of related sets of information and can be stored in a database, in a Microsoft Excel file, or in memory. A data set definition instructs Business Central methods to access, read, and parse a data set. Business Central does not store data. It enables you to define access to a data set regardless of where the data is stored. For example, if data is stored in a database, a valid data set can contain the entire database or a subset of the database as a result of an SQL query. In both cases the data is used as input for the reporting components of Business Central which then displays the information. To access a data set, you must create and register a data set definition. The data set definition specifies the location of the data set, options to access it, read it, and parse it, and the columns that it contains. Note The Data Sets page is visible only to users with the admin role. 28.1. Adding data sets You can create a data set to fetch data from an external data source and use that data for the reporting components. Procedure In Business Central, go to Admin Data Sets . The Data Sets page opens. Click New Data Set and select one of the following provider types: Bean: Generates a data set from a Java class CSV: Generates a data set from a remote or local CSV file SQL: Generates a data set from an ANSI-SQL compliant database Elastic Search: Generates a data set from Elastic Search nodes Prometheus: Generates a data set using the Prometheus query Kafka: Generates a data set using metrics from Kafka broker, consumer, or producer Note You must configure KIE Server for Prometheus , Kafka , and Execution Server options. Complete the Data Set Creation Wizard and click Test . Note The configuration steps differ based on the provider you choose. Click Save . 28.2. Editing data sets You can edit existing data sets to ensure that the data fetched to the reporting components is up-to-date. Procedure In Business Central, go to Admin Data Sets . The Data Set Explorer page opens. In the Data Set Explorer pane, search for the data set you want to edit, select the data set, and click Edit . In the Data Set Editor pane, use the appropriate tab to edit the data as required. The tabs differ based on the data set provider type you chose. For example, the following changes are applicable for editing a CSV data provider: CSV Configuration: Enables you to change the name of the data set definition, the source file, the separator, and other properties. Preview: Enables you to preview the data. After you click Test in the CSV Configuration tab, the system executes the data set lookup call and if the data is available, a preview appears. Note that the Preview tab has two sub-tabs: Data columns: Enables you to specify what columns are part of your data set definition. Filter: Enables you to add a new filter. Advanced: Enables you to manage the following configurations: Caching: See Caching data for more information. Cache life-cycle Enables you to specify an interval of time after which a data set (or data) is refreshed. The Refresh on stale data feature refreshes the cached data when the back-end data changes. After making the required changes, click Validate . Click Save . 28.3. Data refresh The data refresh feature enables you to specify an interval of time after which a data set (or data) is refreshed. You can access the Data refresh every feature on the Advanced tab of the data set. 
The Refresh on stale data feature refreshes the cached data when the back-end data changes. 28.4. Caching data Business Central provides caching mechanisms for storing data sets and performing data operations using in-memory data. Caching data reduces network traffic, remote system payload, and processing time. To avoid performance issues, configure the cache settings in Business Central. For any data lookup call that results in a data set, the caching method determines where the data lookup call is executed and where the resulting data set is stored. An example of a data lookup call would be all the mortgage applications whose locale parameter is set as "Urban". Business Central data set functionality provides two cache levels: Client level Back-end level You can set the Client Cache and Backend Cache settings on the Advanced tab of the data set. Client cache When the cache is turned on, the data set is cached in a web browser during the lookup operation and further lookup operations do not perform requests to the back-end. Data set operations like grouping, aggregations, filtering, and sorting are processed in the web browser. Enable client caching only if the data set size is small, for example, for data sets with less than 10 MB of data. For large data sets, browser issues such as slow performance or intermittent freezing can occur. Client caching reduces the number of back-end requests including requests to the storage system. Back-end cache When the cache is enabled, the decision engine caches the data set. This reduces the number of back-end requests to the remote storage system. All data set operations are performed in the decision engine using in-memory data. Enable back-end caching only if the data set size is not updated frequently and it can be stored and processed in memory. Using back-end caching is also useful in cases with low latency connectivity issues with the remote storage. Note Back-end cache settings are not always visible in the Advanced tab of the Data Set Editor because Java and CSV data providers rely on back-end caching (data set must be in the memory) in order to resolve any data lookup operation using the in-memory decision engine.
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/data-sets-authoring-con_configuring-central
8.4. Common NFS Mount Options
8.4. Common NFS Mount Options Beyond mounting a file system with NFS on a remote host, it is also possible to specify other options at mount time to make the mounted share easier to use. These options can be used with manual mount commands, /etc/fstab settings, and autofs . The following are options commonly used for NFS mounts: lookupcache= mode Specifies how the kernel should manage its cache of directory entries for a given mount point. Valid arguments for mode are all , none , or pos / positive . nfsvers= version Specifies which version of the NFS protocol to use, where version is 3 or 4. This is useful for hosts that run multiple NFS servers. If no version is specified, NFS uses the highest version supported by the kernel and mount command. The option vers is identical to nfsvers , and is included in this release for compatibility reasons. noacl Turns off all ACL processing. This may be needed when interfacing with older versions of Red Hat Enterprise Linux, Red Hat Linux, or Solaris, since the most recent ACL technology is not compatible with older systems. nolock Disables file locking. This setting is sometimes required when connecting to very old NFS servers. noexec Prevents execution of binaries on mounted file systems. This is useful if the system is mounting a non-Linux file system containing incompatible binaries. nosuid Disables set-user-identifier or set-group-identifier bits. This prevents remote users from gaining higher privileges by running a setuid program. port= num Specifies the numeric value of the NFS server port. If num is 0 (the default value), then mount queries the remote host's rpcbind service for the port number to use. If the remote host's NFS daemon is not registered with its rpcbind service, the standard NFS port number of TCP 2049 is used instead. rsize= num and wsize= num These options set the maximum number of bytes to be transfered in a single NFS read or write operation. There is no fixed default value for rsize and wsize . By default, NFS uses the largest possible value that both the server and the client support. In Red Hat Enterprise Linux 7, the client and server maximum is 1,048,576 bytes. For more details, see the What are the default and maximum values for rsize and wsize with NFS mounts? KBase article. sec= flavors Security flavors to use for accessing files on the mounted export. The flavors value is a colon-separated list of one or more security flavors. By default, the client attempts to find a security flavor that both the client and the server support. If the server does not support any of the selected flavors, the mount operation fails. sec=sys uses local UNIX UIDs and GIDs. These use AUTH_SYS to authenticate NFS operations. sec=krb5 uses Kerberos V5 instead of local UNIX UIDs and GIDs to authenticate users. sec=krb5i uses Kerberos V5 for user authentication and performs integrity checking of NFS operations using secure checksums to prevent data tampering. sec=krb5p uses Kerberos V5 for user authentication, integrity checking, and encrypts NFS traffic to prevent traffic sniffing. This is the most secure setting, but it also involves the most performance overhead. tcp Instructs the NFS mount to use the TCP protocol. udp Instructs the NFS mount to use the UDP protocol. For more information, see man mount and man nfs .
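A combined example that applies several of these options, with placeholder server and mount point names, might look like the following, either on the command line or as an /etc/fstab entry:
mount -t nfs -o nfsvers=4,rsize=1048576,wsize=1048576,sec=krb5p,nosuid,noexec server.example.com:/export /mnt/nfs
# equivalent /etc/fstab entry:
# server.example.com:/export  /mnt/nfs  nfs  nfsvers=4,rsize=1048576,wsize=1048576,sec=krb5p,nosuid,noexec  0 0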
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/s1-nfs-client-config-options
API overview
API overview OpenShift Container Platform 4.16 Overview content for the OpenShift Container Platform API Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/api_overview/index
Chapter 6. Composable Services and Custom Roles
Chapter 6. Composable Services and Custom Roles The Overcloud usually consists of nodes in predefined roles such as Controller nodes, Compute nodes, and different storage node types. Each of these default roles contains a set of services defined in the core Heat template collection on the director node. However, the architecture of the core Heat templates provide methods to do the following tasks: Create custom roles Add and remove services from each role This allows the possibility to create different combinations of services on different roles. This chapter explores the architecture of custom roles, composable services, and methods for using them. 6.1. Supported Role Architecture The following architectures are available when using custom roles and composable services: Architecture 1 - Default Architecture Uses the default roles_data files. All controller services are contained within one Controller role. Architecture 2 - Supported Standalone Roles Use the predefined files in /usr/share/openstack-tripleo-heat-templates/roles to generate a custom roles_data file`. See Section 6.2.3, "Supported Custom Roles" . Architecture 3 - Custom Composable Services Create your own roles and use them to generate a custom roles_data file. Note that only a limited number of composable service combinations have been tested and verified and Red Hat cannot support all composable service combinations. 6.2. Roles 6.2.1. Examining the roles_data File The Overcloud creation process defines its roles using a roles_data file. The roles_data file contains a YAML-formatted list of the roles. The following is a shortened example of the roles_data syntax: The core Heat template collection contains a default roles_data file located at /usr/share/openstack-tripleo-heat-templates/roles_data.yaml . The default file defines the following role types: Controller Compute BlockStorage ObjectStorage CephStorage . The openstack overcloud deploy command includes this file during deployment. You can override this file with a custom roles_data file using the -r argument. For example: 6.2.2. Creating a roles_data File Although you can manually create a custom roles_data file, you can also automatically generate the file using individual role templates. The director provides several commands to manage role templates and automatically generate a custom roles_data file. To list the default role templates, use the openstack overcloud roles list command: To see the role's YAML definition, use the openstack overcloud roles show command: To generate a custom roles_data file, use the openstack overcloud roles generate command to join multiple predefined roles into a single file. For example, the following command joins the Controller , Compute , and Networker roles into a single file: The -o defines the name of the file to create. This creates a custom roles_data file. However, the example uses the Controller and Networker roles, which both contain the same networking agents. This means the networking services scale from Controller to the Networker role. The overcloud balances the load for networking services between the Controller and Networker nodes. To make this Networker role standalone, you can create your own custom Controller role, as well as any other role needed. This allows you to generate a roles_data file from your own custom roles. Copy the directory from the core Heat template collection to the stack user's home directory: Add or modify the custom role files in this directory. 
Use the --roles-path option with any of the aforementioned role sub-commands to use this directory as the source for your custom roles. For example: This generates a single my_roles_data.yaml file from the individual roles in the ~/roles directory. Note The default roles collection also contains the ControllerOpenStack role, which does not include services for Networker , Messaging , and Database roles. You can use the ControllerOpenStack combined with the standalone Networker , Messaging , and Database roles. 6.2.3. Supported Custom Roles The following table contains information about the available custom roles. You can find custom role templates in the /usr/share/openstack-tripleo-heat-templates/roles directory. Role Description File BlockStorage OpenStack Block Storage (cinder) node. BlockStorage.yaml CellController Compute cell for hosting instances. Includes services for the cell conductor, message queue, and database. CellController.yaml CephAll Full standalone Ceph Storage node. Includes OSD, MON, Object Gateway (RGW), Object Operations (MDS), Manager (MGR), and RBD Mirroring. CephAll.yaml CephFile Standalone scale-out Ceph Storage file role. Includes OSD and Object Operations (MDS). CephFile.yaml CephObject Standalone scale-out Ceph Storage object role. Includes OSD and Object Gateway (RGW). CephObject.yaml CephStorage Ceph Storage OSD node role. CephStorage.yaml ComputeAlt Alternate Compute node role. ComputeAlt.yaml ComputeDVR DVR enabled Compute node role. ComputeDVR.yaml ComputeHCI Compute node with hyper-converged infrastructure. Includes Compute and Ceph OSD services. ComputeHCI.yaml ComputeInstanceHA Compute Instance HA node role. Use in conjunction with the environments/compute-instanceha.yaml environment file. ComputeInstanceHA.yaml ComputeLiquidio Compute node with Cavium Liquidio Smart NIC. ComputeLiquidio.yaml ComputeOvsDpdkRT Compute OVS DPDK RealTime role. ComputeOvsDpdkRT.yaml ComputeOvsDpdk Compute OVS DPDK role. ComputeOvsDpdk.yaml ComputePPC64LE Compute role for ppc64le servers. ComputePPC64LE.yaml ComputeRealTime Compute role optimized for real-time behaviour. When using this role, it is mandatory that an overcloud-realtime-compute image is available and the role specific parameters IsolCpusList , NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet are set according to the hardware of the real-time compute nodes. ComputeRealTime.yaml ComputeSriovRT Compute SR-IOV RealTime role. ComputeSriovRT.yaml ComputeSriov Compute SR-IOV role. ComputeSriov.yaml Compute Standard Compute node role. Compute.yaml ControllerAllNovaStandalone Controller role that does not contain the database, messaging, networking, and OpenStack Compute (nova) control components. Use in combination with the Database , Messaging , Networker , and Novacontrol roles. ControllerAllNovaStandalone.yaml ControllerNoCeph Controller role with core Controller services loaded but no Ceph Storage (MON) components. This role handles database, messaging, and network functions but not any Ceph Storage functions. ControllerNoCeph.yaml ControllerNovaStandalone Controller role that does not contain the OpenStack Compute (nova) control component. Use in combination with the Novacontrol role. ControllerNovaStandalone.yaml ControllerOpenstack Controller role that does not contain the database, messaging, and networking components. Use in combination with the Database , Messaging , and Networker roles. ControllerOpenstack.yaml ControllerStorageNfs Controller role with all core services loaded and uses Ceph NFS.
This role handles database, messaging, and network functions. ControllerStorageNfs.yaml Controller Controller role with all core services loaded. This role handles database, messaging, and network functions. Controller.yaml Database Standalone database role. Database managed as a Galera cluster using Pacemaker. Database.yaml HciCephAll Compute node with hyper-converged infrastructure and all Ceph Storage services. Includes OSD, MON, Object Gateway (RGW), Object Operations (MDS), Manager (MGR), and RBD Mirroring. HciCephAll.yaml HciCephFile Compute node with hyper-converged infrastructure and Ceph Storage file services. Includes OSD and Object Operations (MDS). HciCephFile.yaml HciCephMon Compute node with hyper-converged infrastructure and Ceph Storage block services. Includes OSD, MON, and Manager. HciCephMon.yaml HciCephObject Compute node with hyper-converged infrastructure and Ceph Storage object services. Includes OSD and Object Gateway (RGW). HciCephObject.yaml IronicConductor Ironic Conductor node role. IronicConductor.yaml Messaging Standalone messaging role. RabbitMQ managed with Pacemaker. Messaging.yaml Networker (ML2/OVS) Standalone networking role under ML2/OVS. Runs OpenStack networking (neutron) agents on their own. If your deployment uses the ML2/OVN mechanism driver, see Creating a Custom Networker Role with ML2/OVN . Networker.yaml Novacontrol Standalone nova-control role to run OpenStack Compute (nova) control agents on their own. Novacontrol.yaml ObjectStorage Swift Object Storage node role. ObjectStorage.yaml Telemetry Telemetry role with all the metrics and alarming services. Telemetry.yaml 6.2.4. Creating a Custom Networker Role with ML2/OVN To deploy a custom networker role when your deployment uses the ML2/OVN mechanism driver, you must use an environment file to set the parameter for the role on networker nodes and clear it on controller nodes. Use an environment file such as neutron-ovn-dvr-ha.yaml . Procedure On controller nodes, clear OVNCMSOptions : On networker nodes, set OVNCMSOptions to 'enable-chassis-as-gw' : 6.2.5. Examining Role Parameters Each role uses the following parameters: name (Mandatory) The name of the role, which is a plain text name with no spaces or special characters. Check that the chosen name does not cause conflicts with other resources. For example, use Networker as a name instead of Network . description (Optional) A plain text description for the role. tags (Optional) A YAML list of tags that define role properties. Use this parameter to define the primary role with both the controller and primary tags together: Important If you do not tag the primary role, the first role defined becomes the primary role. Ensure that this role is the Controller role. networks A YAML list or dictionary of networks to configure on the role. If using a YAML list, list each composable network: If using a dictionary, map each network to a specific subnet in your composable networks. Default networks include External , InternalApi , Storage , StorageMgmt , Tenant , and Management . CountDefault (Optional) Defines the default number of nodes to deploy for this role. HostnameFormatDefault (Optional) Defines the default hostname format for the role. The default naming convention uses the following format: For example, the default Controller nodes are named: disable_constraints (Optional) Defines whether to disable OpenStack Compute (nova) and OpenStack Image Storage (glance) constraints when deploying with the director.
Used when deploying an overcloud with pre-provisioned nodes. For more information, see Configuring a basic overcloud with pre-provisioned nodes in the Director Installation and Usage guide. update_serial (Optional) Defines how many nodes to update simultaneously during OpenStack update operations. In the default roles_data.yaml file: The default is 1 for Controller, Object Storage, and Ceph Storage nodes. The default is 25 for Compute and Block Storage nodes. If you omit this parameter from a custom role, the default is 1 . ServicesDefault (Optional) Defines the default list of services to include on the node. See Section 6.3.2, "Examining Composable Service Architecture" for more information. These parameters provide a means to create new roles and also define which services to include. The openstack overcloud deploy command integrates the parameters from the roles_data file into some of the Jinja2-based templates. For example, at certain points, the overcloud.j2.yaml Heat template iterates over the list of roles from roles_data.yaml and creates parameters and resources specific to each respective role. The resource definition for each role in the overcloud.j2.yaml Heat template appears as the following snippet: This snippet shows how the Jinja2-based template incorporates the {{role.name}} variable to define the name of each role as an OS::Heat::ResourceGroup resource. This in turn uses each name parameter from the roles_data file to name each respective OS::Heat::ResourceGroup resource. 6.2.6. Creating a New Role In this example, the aim is to create a new Horizon role to host the OpenStack Dashboard ( horizon ) only. In this situation, you create a custom roles directory that includes the new role information. Create a custom copy of the default roles directory: Create a new file called ~/roles/Horizon.yaml and create a new Horizon role containing base and core OpenStack Dashboard services. For example: It is a good idea to set the CountDefault to 1 so that a default Overcloud always includes the Horizon node. If scaling the services in an existing overcloud, keep the existing services on the Controller role. If creating a new overcloud and you want the OpenStack Dashboard to remain on the standalone role, remove the OpenStack Dashboard components from the Controller role definition: Generate the new roles_data file using the roles directory as the source: You might need to define a new flavor for this role so that you can tag specific nodes. For this example, use the following commands to create a horizon flavor: Tag nodes into the new flavor using the following command: Define the Horizon node count and flavor using the following environment file snippet: Include the new roles_data file and environment file when running the openstack overcloud deploy command. For example: When the deployment completes, this creates a three-node Overcloud consisting of one Controller node, one Compute node, and one Horizon node. To view the Overcloud's list of nodes, run the following command: 6.3. Composable Services 6.3.1. Guidelines and Limitations Note the following guidelines and limitations for the composable node architecture. For services not managed by Pacemaker: You can assign services to standalone custom roles. You can create additional custom roles after the initial deployment and deploy them to scale existing services. For services managed by Pacemaker: You can assign Pacemaker-managed services to standalone custom roles. Pacemaker has a 16 node limit.
If you assign the Pacemaker service ( OS::TripleO::Services::Pacemaker ) to 16 nodes, subsequent nodes must use the Pacemaker Remote service ( OS::TripleO::Services::PacemakerRemote ) instead. You cannot have the Pacemaker service and Pacemaker Remote service on the same role. Do not include the Pacemaker service ( OS::TripleO::Services::Pacemaker ) on roles that do not contain Pacemaker-managed services. You cannot scale up or scale down a custom role that contains OS::TripleO::Services::Pacemaker or OS::TripleO::Services::PacemakerRemote services. General limitations: You cannot change custom roles and composable services during a major version upgrade. You cannot modify the list of services for any role after deploying an Overcloud. Modifying the service lists after Overcloud deployment can cause deployment errors and leave orphaned services on nodes. 6.3.2. Examining Composable Service Architecture The core Heat template collection contains two sets of composable service templates: deployment contains the templates for key OpenStack Platform services. puppet/services contains legacy templates for configuring composable services. In some cases, the composable services use templates from this directory for compatibility. In most cases, the composable services use the templates in the deployment directory. Each template contains a description that identifies its purpose. For example, the deployment/time/ntp-baremetal-puppet.yaml service template contains the following description: These service templates are registered as resources specific to a Red Hat OpenStack Platform deployment. This means you can call each resource using a unique Heat resource namespace defined in the overcloud-resource-registry-puppet.j2.yaml file. All services use the OS::TripleO::Services namespace for their resource type. Some resources use the base composable service templates directly. For example: However, core services require containers and use the containerized service templates. For example, the keystone containerized service uses the following resource: These containerized templates usually reference other templates to include dependencies. For example, the deployment/keystone/keystone-container-puppet.yaml template stores the output of the base template in the ContainersCommon resource: The containerized template can then incorporate functions and data from the containers-common.yaml template. The overcloud.j2.yaml Heat template includes a section of Jinja2-based code to define a service list for each custom role in the roles_data.yaml file: For the default roles, this creates the following service list parameters: ControllerServices , ComputeServices , BlockStorageServices , ObjectStorageServices , and CephStorageServices . You define the default services for each custom role in the roles_data.yaml file. For example, the default Controller role contains the following content: These services are then defined as the default list for the ControllerServices parameter. Note You can also use an environment file to override the default list for the service parameters. For example, you can define ControllerServices as a parameter_default in an environment file to override the services list from the roles_data.yaml file. 6.3.3. Adding and Removing Services from Roles The basic method of adding or removing services involves creating a copy of the default service list for a node role and then adding or removing services.
For example, you might aim to remove OpenStack Orchestration ( heat ) from the Controller nodes. In this situation, create a custom copy of the default roles directory: Edit the ~/roles/Controller.yaml file and modify the service list for the ServicesDefault parameter. Scroll to the OpenStack Orchestration services and remove them: Generate the new roles_data file. For example: Include this new roles_data file when running the openstack overcloud deploy command. For example: This deploys an Overcloud without OpenStack Orchestration services installed on the Controller nodes. Note You can also disable services in the roles_data file using a custom environment file. Redirect the services that you want to disable to the OS::Heat::None resource. For example: 6.3.4. Enabling Disabled Services Some services are disabled by default. These services are registered as null operations ( OS::Heat::None ) in the overcloud-resource-registry-puppet.j2.yaml file. For example, the Block Storage backup service ( cinder-backup ) is disabled: To enable this service, include an environment file that links the resource to its respective Heat templates in the puppet/services directory. Some services have predefined environment files in the environments directory. For example, the Block Storage backup service uses the environments/cinder-backup.yaml file, which contains the following: This overrides the default null operation resource and enables the service. Include this environment file when running the openstack overcloud deploy command. 6.3.5. Creating a Generic Node with No Services Red Hat OpenStack Platform provides the ability to create generic Red Hat Enterprise Linux 8 nodes without any OpenStack services configured. This is useful when you need to host software outside of the core Red Hat OpenStack Platform environment. For example, OpenStack Platform provides integration with monitoring tools such as Kibana and Sensu. For more information, see the Monitoring Tools Configuration Guide . While Red Hat does not provide support for the monitoring tools themselves, the director can create a generic Red Hat Enterprise Linux 8 node to host these tools. Note The generic node still uses the base overcloud-full image rather than a base Red Hat Enterprise Linux 8 image. This means the node has some Red Hat OpenStack Platform software installed but not enabled or configured. Creating a generic node requires a new role without a ServicesDefault list: Include the role in your custom roles_data file ( roles_data_with_generic.yaml ). Make sure to keep the existing Controller and Compute roles. You can also include an environment file ( generic-node-params.yaml ) to specify how many generic Red Hat Enterprise Linux 8 nodes you require and the flavor when selecting nodes to provision. For example: Include both the roles file and the environment file when running the openstack overcloud deploy command. For example: This deploys a three-node environment with one Controller node, one Compute node, and one generic Red Hat Enterprise Linux 8 node.
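As a sketch of what the generic role and its environment file can look like (the reference material later in this chapter shows only the "- name: Generic" line; the CountDefault and HostnameFormatDefault values below are illustrative assumptions, while the flavor and count values mirror the generic-node-params.yaml example):

- name: Generic
  CountDefault: 0
  HostnameFormatDefault: '%stackname%-generic-%index%'

parameter_defaults:
  OvercloudGenericFlavor: baremetal
  GenericCount: 1

Because the role defines no ServicesDefault list, the resulting node is provisioned from the overcloud-full image but no OpenStack services are configured or enabled on it.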
[ "- name: Controller description: | Controller role that has all the controller services loaded and handles Database, Messaging and Network functions. ServicesDefault: - OS::TripleO::Services::AuditD - OS::TripleO::Services::CACerts - OS::TripleO::Services::CephClient - name: Compute description: | Basic Compute Node role ServicesDefault: - OS::TripleO::Services::AuditD - OS::TripleO::Services::CACerts - OS::TripleO::Services::CephClient", "openstack overcloud deploy --templates -r ~/templates/roles_data-custom.yaml", "openstack overcloud roles list BlockStorage CephStorage Compute ComputeHCI ComputeOvsDpdk Controller", "openstack overcloud roles show Compute", "openstack overcloud roles generate -o ~/roles_data.yaml Controller Compute Networker", "cp -r /usr/share/openstack-tripleo-heat-templates/roles ~/.", "openstack overcloud roles generate -o my_roles_data.yaml --roles-path ~/roles Controller Compute Networker", "ControllerParameters: OVNCMSOptions: \"\"", "NetworkerParameters: OVNCMSOptions: \"enable-chassis-as-gw\"", "- name: Controller tags: - primary - controller", "networks: - External - InternalApi - Storage - StorageMgmt - Tenant", "networks: External: subnet: external_subnet InternalApi: subnet: internal_api_subnet Storage: subnet: storage_subnet StorageMgmt: subnet: storage_mgmt_subnet Tenant: subnet: tenant_subnet", "[STACK NAME]-[ROLE NAME]-[NODE ID]", "overcloud-controller-0 overcloud-controller-1 overcloud-controller-2", "{{role.name}}: type: OS::Heat::ResourceGroup depends_on: Networks properties: count: {get_param: {{role.name}}Count} removal_policies: {get_param: {{role.name}}RemovalPolicies} resource_def: type: OS::TripleO::{{role.name}} properties: CloudDomain: {get_param: CloudDomain} ServiceNetMap: {get_attr: [ServiceNetMap, service_net_map]} EndpointMap: {get_attr: [EndpointMap, endpoint_map]}", "cp -r /usr/share/openstack-tripleo-heat-templates/roles ~/.", "- name: Horizon CountDefault: 1 HostnameFormatDefault: '%stackname%-horizon-%index%' ServicesDefault: - OS::TripleO::Services::CACerts - OS::TripleO::Services::Kernel - OS::TripleO::Services::Ntp - OS::TripleO::Services::Snmp - OS::TripleO::Services::Sshd - OS::TripleO::Services::Timezone - OS::TripleO::Services::TripleoPackages - OS::TripleO::Services::TripleoFirewall - OS::TripleO::Services::SensuClient - OS::TripleO::Services::FluentdClient - OS::TripleO::Services::AuditD - OS::TripleO::Services::Collectd - OS::TripleO::Services::MySQLClient - OS::TripleO::Services::Apache - OS::TripleO::Services::Horizon", "- name: Controller CountDefault: 1 ServicesDefault: - OS::TripleO::Services::GnocchiMetricd - OS::TripleO::Services::GnocchiStatsd - OS::TripleO::Services::HAproxy - OS::TripleO::Services::HeatApi - OS::TripleO::Services::HeatApiCfn - OS::TripleO::Services::HeatApiCloudwatch - OS::TripleO::Services::HeatEngine # - OS::TripleO::Services::Horizon # Remove this service - OS::TripleO::Services::IronicApi - OS::TripleO::Services::IronicConductor - OS::TripleO::Services::Iscsid - OS::TripleO::Services::Keepalived", "openstack overcloud roles generate -o roles_data-horizon.yaml --roles-path ~/roles Controller Compute Horizon", "openstack flavor create --id auto --ram 6144 --disk 40 --vcpus 4 horizon openstack flavor set --property \"cpu_arch\"=\"x86_64\" --property \"capabilities:boot_option\"=\"local\" --property \"capabilities:profile\"=\"horizon\" horizon openstack flavor set --property resources:VCPU=0 --property resources:MEMORY_MB=0 --property resources:DISK_GB=0 --property resources:CUSTOM_BAREMETAL=1 
horizon", "openstack baremetal node set --property capabilities='profile:horizon,boot_option:local' 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13", "parameter_defaults: OvercloudHorizonFlavor: horizon HorizonCount: 1", "openstack overcloud deploy --templates -r ~/templates/roles_data-horizon.yaml -e ~/templates/node-count-flavor.yaml", "openstack server list", "description: > NTP service deployment using puppet, this YAML file creates the interface between the HOT template and the puppet manifest that actually installs and configure NTP.", "resource_registry: OS::TripleO::Services::Ntp: deployment/time/ntp-baremetal-puppet.yaml", "resource_registry: OS::TripleO::Services::Keystone: deployment/keystone/keystone-container-puppet.yaml", "resources: ContainersCommon: type: ../containers-common.yaml", "{{role.name}}Services: description: A list of service resources (configured in the Heat resource_registry) which represent nested stacks for each service that should get installed on the {{role.name}} role. type: comma_delimited_list default: {{role.ServicesDefault|default([])}}", "- name: Controller CountDefault: 1 ServicesDefault: - OS::TripleO::Services::CACerts - OS::TripleO::Services::CephMon - OS::TripleO::Services::CephExternal - OS::TripleO::Services::CephRgw - OS::TripleO::Services::CinderApi - OS::TripleO::Services::CinderBackup - OS::TripleO::Services::CinderScheduler - OS::TripleO::Services::CinderVolume - OS::TripleO::Services::Core - OS::TripleO::Services::Kernel - OS::TripleO::Services::Keystone - OS::TripleO::Services::GlanceApi - OS::TripleO::Services::GlanceRegistry", "cp -r /usr/share/openstack-tripleo-heat-templates/roles ~/.", "- OS::TripleO::Services::GlanceApi - OS::TripleO::Services::GlanceRegistry - OS::TripleO::Services::HeatApi # Remove this service - OS::TripleO::Services::HeatApiCfn # Remove this service - OS::TripleO::Services::HeatApiCloudwatch # Remove this service - OS::TripleO::Services::HeatEngine # Remove this service - OS::TripleO::Services::MySQL - OS::TripleO::Services::NeutronDhcpAgent", "openstack overcloud roles generate -o roles_data-no_heat.yaml --roles-path ~/roles Controller Compute Networker", "openstack overcloud deploy --templates -r ~/templates/roles_data-no_heat.yaml", "resource_registry: OS::TripleO::Services::HeatApi: OS::Heat::None OS::TripleO::Services::HeatApiCfn: OS::Heat::None OS::TripleO::Services::HeatApiCloudwatch: OS::Heat::None OS::TripleO::Services::HeatEngine: OS::Heat::None", "OS::TripleO::Services::CinderBackup: OS::Heat::None", "resource_registry: OS::TripleO::Services::CinderBackup: ../podman/services/pacemaker/cinder-backup.yaml", "openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml", "- name: Generic", "parameter_defaults: OvercloudGenericFlavor: baremetal GenericCount: 1", "openstack overcloud deploy --templates -r ~/templates/roles_data_with_generic.yaml -e ~/templates/generic-node-params.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/advanced_overcloud_customization/Chap-Roles
Chapter 4. Installing a cluster on IBM Power Virtual Server with customizations
Chapter 4. Installing a cluster on IBM Power Virtual Server with customizations In OpenShift Container Platform version 4.15, you can install a customized cluster on infrastructure that the installation program provisions on IBM Power Virtual Server. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring the Cloud Credential Operator utility . 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. 
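As an optional check that is not part of the documented procedure, you can confirm the type and fingerprint of the new key before continuing:

ssh-keygen -l -f <path>/<file_name>.pub

The output lists the key length, fingerprint, and key type, which helps verify that the expected ed25519 key is in place.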
If you have an existing key pair, ensure your public key is in your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.5. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account.
Procedure Export your API key for your account as a global variable: USD export IBMCLOUD_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 4.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on IBM Power Virtual Server. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select powervs as the platform to target. Select the region to deploy the cluster to. Select the zone to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for IBM Power(R) Virtual Server 4.6.1. Sample customized install-config.yaml file for IBM Power Virtual Server You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id region: powervs-region zone: powervs-zone powervsResourceGroup: "ibmcloud-resource-group" 10 serviceInstanceGUID: "powervs-region-service-instance-guid" vpcRegion : vpc-region publish: External pullSecret: '{"auths": ...}' 11 sshKey: ssh-ed25519 AAAA... 12 1 5 If you do not provide these parameters and values, the installation program provides the default value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 8 The smtLevel specifies the level of SMT to set to the control plane and compute machines. The supported values are 1, 2, 4, 8, 'off' and 'on' . The default value is 8. The smtLevel 'off' sets SMT to off and smtlevel 'on' sets SMT to the default value 8 on the cluster nodes. Note When simultaneous multithreading (SMT), or hyperthreading is not enabled, one vCPU is equivalent to one physical core. When enabled, total vCPUs is computed as: (Thread(s) per core * Core(s) per socket) * Socket(s). The smtLevel controls the threads per core. Lower SMT levels may require additional assigned cores when deploying the cluster nodes. You can do this by setting the 'processors' parameter in the install-config.yaml file to an appropriate value to meet the requirements for deploying OpenShift Container Platform successfully. 9 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 10 The name of an existing resource group. 11 Required. The installation program prompts you for this value. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 4.6.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. 
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.7. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. 
While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for your cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run.
4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 4.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.9. 
Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 4.11.
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 4.12. Next steps Customize your cluster If necessary, you can opt out of remote health reporting
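As an additional, optional verification after you log in with the exported kubeconfig (these are generic oc commands rather than steps taken from this guide), you can confirm that the cluster nodes and Operators are healthy before you begin post-installation customization:

oc get nodes
oc get clusteroperators

All nodes are expected to report the Ready status, and the cluster Operators are expected to report Available without Degraded conditions.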
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IBMCLOUD_API_KEY=<api_key>", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id region: powervs-region zone: powervs-zone powervsResourceGroup: \"ibmcloud-resource-group\" 10 serviceInstanceGUID: \"powervs-region-service-instance-guid\" vpcRegion : vpc-region publish: External pullSecret: '{\"auths\": ...}' 11 sshKey: ssh-ed25519 AAAA... 12", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_ibm_power_virtual_server/installing-ibm-power-vs-customizations
Chapter 7. Bucket policies in the Multicloud Object Gateway
Chapter 7. Bucket policies in the Multicloud Object Gateway OpenShift Data Foundation supports AWS S3 bucket policies. Bucket policies allow you to grant users access permissions for buckets and the objects in them. 7.1. Introduction to bucket policies Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use JSON-based access policy language. For more information about access policy language, see AWS Access Policy Language Overview . 7.2. Using bucket policies in Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . A valid Multicloud Object Gateway user account. See Creating a user in the Multicloud Object Gateway for instructions to create a user account. Procedure To use bucket policies in the MCG: Create the bucket policy in JSON format. For example: Replace [email protected] with a valid Multicloud Object Gateway user account. Using an AWS S3 client, use the put-bucket-policy command to apply the bucket policy to your S3 bucket: Replace ENDPOINT with the S3 endpoint. Replace MyBucket with the bucket to set the policy on. Replace BucketPolicy with the bucket policy JSON file. Add --no-verify-ssl if you are using the default self-signed certificates. For example: For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy . Note The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically creates an account obc-account.<generated bucket name>@noobaa.io . Note Bucket policy conditions are not supported. Additional resources There are many available elements for bucket policies with regard to access permissions. For details on these elements and examples of how they can be used to control the access permissions, see AWS Access Policy Language Overview . For more examples of bucket policies, see AWS Bucket Policy Examples . 7.3. Creating a user in the Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Download the MCG command-line interface for easier management. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found at Download Red Hat OpenShift Data Foundation page . Note Choose the correct Product Variant according to your architecture. Procedure Execute the following command to create an MCG user account: <noobaa-account-name> Specify the name of the new MCG user account. --allow_bucket_create Allows the user to create new buckets. --allowed_buckets Sets the user's allowed bucket list (use commas or multiple flags). --default_resource Sets the default resource. The new buckets are created on this default resource (including the future ones). --full_permission Allows this account to access all existing and future buckets. Important You need to provide permission to access at least one bucket or full permission to access all the buckets.
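As an illustrative sketch only, the following policy grants read access to the account that NooBaa generates for an object bucket claim; the bucket name my-obc-bucket and the matching obc-account principal are placeholders, not values from this guide:

{
  "Version": "NewVersion",
  "Statement": [
    {
      "Sid": "AllowObcRead",
      "Effect": "Allow",
      "Principal": [ "[email protected]" ],
      "Action": [ "s3:GetObject" ],
      "Resource": [ "arn:aws:s3:::my-obc-bucket/*" ]
    }
  ]
}

Apply the policy with the same put-bucket-policy command shown earlier in this chapter, substituting your endpoint, bucket name, and policy file.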
[ "{ \"Version\": \"NewVersion\", \"Statement\": [ { \"Sid\": \"Example\", \"Effect\": \"Allow\", \"Principal\": [ \"[email protected]\" ], \"Action\": [ \"s3:GetObject\" ], \"Resource\": [ \"arn:aws:s3:::john_bucket\" ] } ] }", "aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy file:// BucketPolicy", "aws --endpoint https://s3-openshift-storage.apps.gogo44.noobaa.org --no-verify-ssl s3api put-bucket-policy -bucket MyBucket --policy file://BucketPolicy", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa account create <noobaa-account-name> [--allow_bucket_create=true] [--allowed_buckets=[]] [--default_resource=''] [--full_permission=false]" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/managing_hybrid_and_multicloud_resources/bucket-policies-in-the-multicloud-object-gateway
Chapter 12. Integrating with S3 API compatible services
Chapter 12. Integrating with S3 API compatible services Red Hat Advanced Cluster Security for Kubernetes can be integrated with S3 API compatible services to enable data backups. These backups can be used for data restoration in the case of an infrastructure disaster or corrupt data. After integrating with the S3 API compatible provider, you can schedule daily or weekly backups and perform manual on-demand backups. The backup includes the entire RHACS database, which includes all configurations, resources, events, and certificates. Make sure that backups are stored securely. Important To back up to Amazon S3, use the dedicated Amazon S3 integration to ensure the best compatibility. Red Hat does not test this integration with every S3 API compatible provider, so the integration is not guaranteed to work with all providers. 12.1. Configuring S3 API compatible integrations in Red Hat Advanced Cluster Security for Kubernetes To configure S3 API compatible backups, create a new integration in Red Hat Advanced Cluster Security for Kubernetes. Prerequisites You have configured an existing S3 bucket. To create a new bucket with required permissions, see your S3 provider documentation. You have read , write , and delete permissions for the S3 bucket, the Access key ID , and the Secret access key . Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the External backups section and select S3 API Compatible . Click New Integration . Enter a name for Integration Name . Enter the number of backups to retain in the Backups To Retain box. For Schedule , select the backup frequency as daily or weekly, and select the time to run the backup process. Enter the Bucket name where you want to store the backup. Optionally, enter an Object Prefix if you want to save the backups in a specific folder structure. Enter the Endpoint under which the S3 compatible service is reachable. If no scheme is specified, the default, https , is used. Enter the Region for the bucket. Consult your provider's documentation to enter the correct region. Select the URL style : Virtual hosted style buckets are addressed as https://<bucket>.<endpoint> . Path style buckets are addressed as https://<endpoint>/<bucket> . Enter the Access Key ID and the Secret Access Key . Select Test to confirm that the integration with the S3 API compatible service is working. Select Create to generate the configuration. After the integration is configured, RHACS automatically backs up all data according to the specified schedule. 12.2. Performing on-demand backups on an S3 API compatible bucket Use the Red Hat Advanced Cluster Security for Kubernetes portal to trigger manual backups of RHACS to an S3 API compatible bucket. Prerequisites You have integrated RHACS with an S3 API compatible service. Procedure In the RHACS portal, go to Platform Configuration Integrations . In the External backups section, click S3 API Compatible . Select the integration name for the S3 bucket where you want to perform a backup. Click Trigger backup . Note When you select Trigger backup , there is no notification. However, RHACS begins the backup task in the background. 12.3. Additional resources Backing up Red Hat Advanced Cluster Security for Kubernetes Restoring from a backup
null
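Before saving the integration, it can help to confirm outside RHACS that the endpoint, credentials, and bucket permissions actually work. The following is a minimal sketch using the standard AWS CLI against an S3 API compatible service; the endpoint URL, bucket name, and credentials are hypothetical placeholders, and the addressing_style setting mirrors the URL style option described above.

# Hypothetical credentials; replace with the values from your provider.
export AWS_ACCESS_KEY_ID=<access_key_id>
export AWS_SECRET_ACCESS_KEY=<secret_access_key>

# Match the URL style you plan to select in the integration (path or virtual).
aws configure set default.s3.addressing_style path

# Read, write, and delete checks against the backup bucket.
aws --endpoint-url https://s3.backup.example.internal s3api list-objects-v2 --bucket rhacs-backups --max-keys 1
echo probe > /tmp/rhacs-probe.txt
aws --endpoint-url https://s3.backup.example.internal s3 cp /tmp/rhacs-probe.txt s3://rhacs-backups/rhacs-probe.txt
aws --endpoint-url https://s3.backup.example.internal s3 rm s3://rhacs-backups/rhacs-probe.txt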
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/integrating/integrate-with-s3-api-compatible-services
7.3. Creating a Red Hat Enterprise Linux 6 Guest with PXE
7.3. Creating a Red Hat Enterprise Linux 6 Guest with PXE Procedure 7.3. Creating a Red Hat Enterprise Linux 6 guest with virt-manager Optional: Preparation Prepare the storage environment for the virtual machine. For more information on preparing storage, refer to the Red Hat Enterprise Linux 6 Virtualization Administration Guide . Important Various storage types may be used for storing guest virtual machines. However, for a virtual machine to be able to use migration features, the virtual machine must be created on networked storage. Red Hat Enterprise Linux 6 requires at least 1GB of storage space. However, Red Hat recommends at least 5GB of storage space for a Red Hat Enterprise Linux 6 installation and for the procedures in this guide. Open virt-manager and start the wizard Open virt-manager by executing the virt-manager command as root or opening Applications System Tools Virtual Machine Manager . Figure 7.15. The main virt-manager window Click on the Create new virtualized guest button to start the new virtualized guest wizard. Figure 7.16. The create new virtualized guest button The New VM window opens. Name the virtual machine Virtual machine names can contain letters, numbers and the following characters: ' _ ', ' . ' and ' - '. Virtual machine names must be unique for migration and cannot consist only of numbers. Choose the installation method from the list of radio buttons. Figure 7.17. The New VM window - Step 1 Click Forward to continue. The remaining steps are the same as the ISO installation procedure. Continue from Step 5 of the ISO installation procedure. From this point, the only difference in this PXE procedure is on the final New VM screen, which shows the Install: PXE Install field. Figure 7.18. The New VM window - Step 5 - PXE Install
null
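As an alternative to the virt-manager wizard described above, a comparable PXE-booted guest can be created from the command line with virt-install. This is a hedged sketch, not part of the original procedure; the guest name, bridge name, disk path, and sizing are assumptions to adapt to your host.

# Create a RHEL 6 guest that boots from the network (PXE) for installation.
virt-install \
    --name rhel6-pxe-guest \
    --ram 2048 \
    --vcpus 2 \
    --disk path=/var/lib/libvirt/images/rhel6-pxe-guest.img,size=6 \
    --network bridge=br0 \
    --pxe \
    --os-variant rhel6 \
    --graphics vnc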
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/sec-virtualization_host_configuration_and_guest_installation_guide-rhel6_install-rhel6guest_with_pxe
Chapter 3. Installing a cluster quickly on GCP
Chapter 3. Installing a cluster quickly on GCP In OpenShift Container Platform version 4.16, you can install a cluster on Google Cloud Platform (GCP) that uses the default configuration options. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. 
Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. 
Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your host, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. If you provide a name that is longer than 6 characters, only the first 6 characters will be used in the infrastructure ID that is generated from the cluster name. Paste the pull secret from Red Hat OpenShift Cluster Manager . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. 
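The credential cleanup at the start of the preceding procedure can be scripted before launching the installer. The following is a minimal sketch, assuming a bash shell; the gcloud revoke step is an assumption that only applies if application default credentials were previously configured on the host.

# Clear environment variables that would override the intended service account key.
unset GOOGLE_CREDENTIALS GOOGLE_CLOUD_KEYFILE_JSON GCLOUD_KEYFILE_JSON

# Remove a previously cached service account key, if one exists.
rm -f ~/.gcp/osServiceAccount.json

# Optionally revoke gcloud application default credentials.
gcloud auth application-default revoke --quiet || true

# Then run the installer interactively as described above.
./openshift-install create cluster --dir <installation_directory> --log-level=info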
Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.6. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.8. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 3.9. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
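After exporting the kubeconfig, a few read-only commands can confirm that the CLI, the cluster, and the web console are reachable. This is a hedged sketch using standard oc commands; the versions, console URL, and node names in your output will differ.

# Client and server versions should both report a 4.16.z release.
oc version

# Confirm the identity in use and print the web console URL.
oc whoami
oc whoami --show-console

# A quick health check of the control plane and worker nodes.
oc get nodes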
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_gcp/installing-gcp-default